Jatin Ganhotra

I am a Senior Software Engineer at IBM Research, AI for Code, Thomas J. Watson Research Center, where I lead the development of autonomous SWE-Agents for intelligent code generation, issue resolution, and software testing.

I am the project lead and architect of iSWE-Agent, IBM Research’s software engineering agent that secured the top position on the Multi-SWE-Bench and SWE-PolyBench Java leaderboards. I also created SWE-Bench-Arena, a platform for rigorous blind evaluation of AI-generated code — assessing quality dimensions like maintainability, readability, and production readiness beyond just passing tests.

My research interests span AI-driven software engineering, conversational AI, dialog systems, and natural language processing. I have published across top-tier AI and software engineering venues, including ICML, ICSE, EMNLP, ACL, NAACL, TACL, Interspeech, and ICASSP.

I earned my M.S. in Computer Science from the University of Illinois Urbana-Champaign and my B.Tech in Computer Engineering from the National Institute of Technology (NIT) Kurukshetra.

You can learn more about my current work here: IBM SWE Agents and iSWE-Agent.

News

Feb 06, 2026 iSWE-Agent, IBM Research’s software engineering agent, achieved Rank #1 on the SWE-PolyBench (Full) Java leaderboard with a 33.33% resolution rate. On the Verified subset, iSWE-Agent scores 46.38% on Java, matching Atlassian Rovo Dev and significantly outperforming PrometheusV1.2 + GPT-5 (33.33%).
iSWE-Agent achieves Rank-1 on SWE-PolyBench (Full) Java leaderboard (6 February 2026)
iSWE-Agent on SWE-PolyBench (Verified) Java — matches Atlassian Rovo Dev, outperforms PrometheusV1.2
Dec 01, 2025 iSWE-Agent, IBM Research’s software engineering agent, achieved Rank #1 on the Multi-SWE-Bench Java leaderboard with a 33% resolution rate, substantially outperforming the previous best score of 28.9%.
iSWE-Agent achieves Rank-1 on Multi-SWE-Bench Java leaderboard (1 December 2025)
Related resources:
  1. IBM Research Blog - iSWE-Agent
Sep 03, 2025 SWE-Bench-Arena, a platform for blind evaluation of AI-generated code patches, is now live. Unlike benchmarks that only measure test-pass rates, SWE-Bench-Arena evaluates patches across five production-relevant dimensions: correctness, maintainability, readability, performance, and simplicity.
SWE-Bench-Arena — blind evaluation of AI-generated code patches
Related resources:
  1. LinkedIn article: “The Hidden Cost of AI Coding Tools: Quality vs. Speed”
Oct 22, 2024 IBM AI Agent SWE-1.0, previewed at IBM TechXchange, showcases how AI can tackle GitHub issues in minutes using only open-source LLMs.
Related resources:
  1. IBM Research Blog - IBM SWE Agents
  2. LinkedIn posts: Announcement | Live demonstration
  3. YouTube video: “IBM AI Agent SWE-1.0 Demonstration for resolving GitHub issues”
Oct 16, 2024 IBM AI Agent SWE-1.0, using only open-source LLMs, achieves a remarkable 23.67% resolution rate on SWE-Bench. Achieving this without relying on proprietary models marks a significant milestone in AI for software engineering. As the architect and technical lead of this solution, I’m excited to see how future SWE-Agents will build upon the foundation we’ve established with open-source LLMs.