Technical Deep Dive

Why LLMs Struggle with Ruby Code: The Training Data Problem

Tony Dong
August 27, 2025
12 min read

Despite Ruby's significant presence on GitHub and Stack Overflow, large language models consistently underperform when generating, reviewing, or debugging Ruby code. Recent research reveals that this isn't due to lack of training data, but rather fundamental issues with how that data is structured, evaluated, and utilized during model training. Here's why your Ruby projects aren't getting the AI assistance they deserve—and how modern approaches are solving this problem.

Key Research Findings

  • Benchmark Bias: LLMs return Python solutions for at least 93.5% of benchmark problems, a result of Python-centric evaluation datasets
  • Performance Gap: Ruby ranks in a lower tier on the MBXP and MultiPL-E benchmarks despite a large dataset presence
  • Training Data Quality: Ruby has quantity but lacks the structured, high-quality examples that drive model performance
  • Limited Language Diversity: LLMs draw on only 6-14 programming languages despite hundreds being in active use

The Ruby Performance Paradox

Ruby presents a fascinating paradox in the world of AI-assisted programming. According to recent research on programming language representation in LLM datasets, Ruby has "quite large" representation in public LLM datasets and maintains "a large presence on GitHub and Stack Overflow" [1]. Yet when it comes to actual performance, the numbers tell a different story.

On the MBXP/Multilingual HumanEval and MultiPL-E benchmarks, LLM performance with Ruby falls below that of the first-tier languages [1]. Even more telling, Ruby isn't included at all in major benchmarks like BabelCode/TP3 and HumanEval-X, suggesting it isn't considered a priority language for LLM evaluation.

The Python Supremacy Problem

The root cause of Ruby's poor performance lies in what researchers call "Python supremacy" in LLM training and evaluation. Recent studies reveal a stark reality: "For each dataset apart from AixBench, all LLMs gave solutions in Python for at least 93.5% of problems" [2].

This isn't accidental. As research shows, "The heavy preference for Python stems from LLMs being created with a focus on achieving state-of-the-art results on widely-used benchmarks, the vast majority of which are Python based, causing their training data to be saturated with Python code" [2].

Python vs Ruby in Training Data

Python Advantages

  • Dominates evaluation benchmarks
  • 93.5%+ of benchmark solutions
  • Extensive high-quality documentation
  • Structured learning resources
  • Academic research preference

Ruby Challenges

  • Lower-tier benchmark performance
  • Excluded from major evaluations
  • Quality vs. quantity data issue
  • Less structured learning content
  • Framework-specific complexity

Training Data Quality vs. Quantity

The issue isn't simply a shortage of Ruby code in training datasets. Major datasets like StarCoder include 783 GB of code in 86 programming languages [3], with Ruby well represented. The problem lies in the quality and structure of that representation.

Research indicates that "the current weak reasoning abilities of Large Language Models, combined with a possible scarcity of sources on the subject, and even worse, potentially many low-quality sources, collectively result in this meager outcome" [1] when it comes to Ruby code generation.

The Documentation Problem

Unlike Python, which benefits from extensive, well-structured documentation and educational content, Ruby's training data often lacks the contextual richness that helps LLMs understand not just syntax, but idiomatic usage patterns. "Without solid expertise in your chosen programming language (especially a nuanced one like Ruby), an LLM is merely guessing" [4].

Ruby's Unique Challenges for LLMs

Analysis of Ruby developer experiences and specialized prompting rules reveals specific patterns where LLMs consistently struggle with Ruby code generation. These challenges go beyond simple syntax to fundamental differences in how Ruby emphasizes expressiveness and convention over explicit configuration.

1. Non-Idiomatic Code Generation

LLMs frequently generate verbose, Python-like code that technically works but violates Ruby conventions. For example, generating explicit getter/setter methods instead of using attr_reader, or creating unnecessary local variables instead of leveraging Ruby's expressive method chaining.
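
As a contrived before-and-after illustration (not output from any specific model), compare a verbose version with its idiomatic equivalent:

```ruby
# Verbose, Python-flavored code an LLM might generate:
class VerboseUser
  def initialize(name)
    @name = name
  end

  # Hand-written getter instead of attr_reader
  def name
    @name
  end
end

def active_names(users)
  result = []
  users.each do |user|
    result.push(user.name) if user.active?
  end
  result
end

# Idiomatic Ruby: attr_reader plus expressive method chaining
class IdiomaticUser
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

def active_names_idiomatic(users)
  users.select(&:active?).map(&:name)
end
```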

2. Rails Convention Failures

Despite Rails' "convention over configuration" philosophy, LLMs often miss critical patterns: improper use of ActiveRecord associations, violating RESTful routing conventions, or placing business logic in controllers instead of service objects or models.
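
A hedged sketch of the pattern reviewers expect (the class and module names here are illustrative, not from any particular codebase): keep the controller thin and move business rules into a service object.

```ruby
# Rails convention: a thin controller that delegates to a service
# object, rather than the inline pricing/mailing logic LLMs often
# generate inside controller actions.
class OrdersController < ApplicationController
  def create
    @order = Orders::PlaceOrder.call(order_params)
    if @order.persisted?
      redirect_to @order
    else
      render :new, status: :unprocessable_entity
    end
  end

  private

  def order_params
    params.require(:order).permit(:product_id, :quantity)
  end
end

# app/services/orders/place_order.rb
module Orders
  class PlaceOrder
    def self.call(attrs)
      order = Order.new(attrs)
      order.total = order.items.sum(&:price) # pricing lives here, not in the controller
      order.save
      order
    end
  end
end
```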

3. Ruby Style Guide Ignorance

Community-developed resources like the Ruby Style Guide establish clear idioms, but LLMs trained on diverse codebases often default to patterns that technically work but feel "un-Ruby-like" to experienced developers. This includes inconsistent naming conventions and failure to leverage Ruby's syntactic sugar.
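
A small contrast makes the point; both versions below run, but only the second reads as idiomatic to a reviewer steeped in the style guide:

```ruby
# Technically correct, but reads like translated Python
if !order.paid?
  status = "pending"
else
  status = "complete"
end
names = users.map { |user| user.name }

# The idioms the Ruby Style Guide prescribes
status = order.paid? ? "complete" : "pending"
names = users.map(&:name)
```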

4. Context-Dependent Magic Methods

Ruby's metaprogramming creates methods that exist only at runtime or depend on specific gem configurations. LLMs struggle with Rails' method_missing patterns, dynamic attribute creation, and framework-specific DSLs that require deep contextual understanding.
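
A simplified sketch of why this is hard (the Settings class is hypothetical): nothing in the source text tells a model that these methods exist.

```ruby
# Methods that exist only at runtime: the class body never
# mentions `timeout`, yet calling it works.
class Settings
  def initialize(options)
    @options = options
  end

  def method_missing(name, *args)
    @options.key?(name) ? @options[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    @options.key?(name) || super
  end
end

settings = Settings.new(timeout: 30, retries: 3)
settings.timeout # => 30, with no `timeout` method defined anywhere
```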

5. Testing Framework Confusion

Ruby's diverse testing ecosystem (RSpec, Minitest, Test::Unit) each has distinct syntax and conventions. LLMs often mix patterns between frameworks or generate tests that lack Ruby testing idioms like proper use of let, subject, or context blocks.
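
For instance, an idiomatic RSpec spec leans on let, subject, and context blocks (the Order and Item models here are hypothetical), where models often emit Minitest-flavored assertions instead:

```ruby
RSpec.describe Order do
  subject(:order) { described_class.new(items: items) }

  context "with no items" do
    let(:items) { [] }

    it "is not ready for checkout" do
      expect(order).not_to be_ready_for_checkout
    end
  end

  context "with one item" do
    let(:items) { [Item.new(price: 10)] }

    it "totals the item prices" do
      expect(order.total).to eq(10)
    end
  end
end
```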

Real-World Impact: Why Ruby Developers Need Better Prompting

The challenges above have led to specialized prompting strategies in the Ruby community. Modern Ruby development environments like Cursor require explicit rules stating: "Write concise, idiomatic Ruby code with accurate examples. Follow Rails conventions and best practices" [9].

These rules specifically address LLM weaknesses by mandating adherence to the Ruby Style Guide, specifying Ruby 3.x features, and requiring explicit Rails MVC patterns—highlighting areas where general LLM training has proven insufficient for Ruby development.

Language Representation Statistics

Recent data analysis reveals the stark differences in how programming languages are represented and prioritized in LLM systems:

| Language   | GitHub/SO Presence | Dataset Representation | Benchmark Performance | LLM Preference      |
|------------|--------------------|------------------------|-----------------------|---------------------|
| Python     | Very High          | High Quality           | Top Tier              | 93.5%+ of solutions |
| JavaScript | Very High          | High Quality           | Top Tier              | 8/40 instances      |
| Ruby       | High               | Mixed Quality          | Lower Tier            | Limited             |

Sources: Data compiled from research on LLM programming language preferences and multilingual benchmark performance [1] [2].

The Limited Language Diversity Problem

The scope of the problem extends beyond just Ruby. Research shows that "In general the range of programming languages that LLMs choose to use is limited, hundreds of programming languages get contributions on GitHub every year, but LLMs only choose to use 6-14 different ones" [2].

This limitation stems from "models being built to prefer more user-friendly languages" [2], with Python and JavaScript dominating due to their prevalence in educational content and benchmark datasets.

How Modern AI Systems Address Ruby's Challenges

The good news is that advanced AI systems are developing sophisticated approaches to overcome these training data limitations. The key lies in combining multiple techniques that go beyond traditional pre-training approaches.

Retrieval Augmented Generation (RAG) for Ruby

RAG addresses the training data quality problem by dynamically retrieving relevant Ruby code examples, documentation, and patterns during inference. Instead of relying solely on patterns learned during training, RAG systems can access:

  • Ruby-specific framework documentation and examples
  • Idiomatic code patterns from high-quality Ruby repositories
  • Community best practices and conventions
  • Project-specific coding standards and patterns
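
A minimal sketch of that inference-time flow, with a hypothetical vector store and LLM client standing in for whatever backends a production system actually uses:

```ruby
# Minimal RAG flow: retrieve Ruby-specific context, then prepend it
# to the generation prompt. `store` and `llm` are hypothetical
# stand-ins for a real embedding index and model client.
def review_with_rag(diff, store:, llm:)
  # 1. Retrieve the most relevant Ruby idioms, gem docs, and style rules
  passages = store.similarity_search(query: diff, k: 5)

  # 2. Assemble a prompt grounded in the retrieved context
  prompt = <<~PROMPT
    You are reviewing Ruby code. Use these references:
    #{passages.map { |p| "- #{p.text}" }.join("\n")}

    Review the following diff for idiomatic Ruby and Rails conventions:
    #{diff}
  PROMPT

  # 3. Generate with retrieved context instead of relying solely on
  #    patterns baked in during pre-training
  llm.complete(prompt: prompt)
end
```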

Research shows that "RAG involves the process of effectively integrating context from retrieved passages with the current generation task, and retrieval augmentation can be applied in many different stages such as pre-training, fine-tuning, and inference" [5].

Post-Training Fine-Tuning Approaches

Post-training fine-tuning specifically addresses Ruby's underrepresentation by training models on curated, high-quality Ruby datasets. Advanced approaches like RAFT (Retrieval Augmented Fine-Tuning) combine the benefits of both RAG and fine-tuning.

RAFT "combines RAG and fine-tuning and provides a training recipe that improves the model's ability to answer questions in an 'open-book' domain setting, teaching LLMs to get smarter about specific topics while improving in-domain RAG performance" [6].

Propel's Approach to Ruby Code Understanding

Propel addresses Ruby's LLM challenges through a sophisticated combination of RAG and post-training fine-tuning specifically designed for Ruby codebases. This dual approach tackles both the quantity and quality issues that plague Ruby in general-purpose LLMs.

Propel's Ruby-Specific Architecture

1. Ruby-Centric RAG System

Propel maintains specialized knowledge bases of Ruby patterns, Rails conventions, gem documentation, and Ruby community best practices that are accessed during code review and generation.

2. Targeted Post-Training

The system undergoes additional training on curated Ruby codebases, focusing on idiomatic patterns, metaprogramming usage, and framework-specific conventions that are underrepresented in general training data.

3. Dynamic Context Integration

By combining retrieved Ruby-specific context with fine-tuned understanding, Propel can provide code reviews and suggestions that understand both Ruby syntax and the nuanced conventions of different Ruby frameworks and communities.

Practical Benefits for Ruby Developers

This specialized approach delivers tangible improvements for Ruby development teams:

  • Idiomatic Ruby Code: Suggestions follow Ruby conventions and best practices
  • Framework Awareness: Understanding of Rails, Sinatra, and other framework patterns
  • Metaprogramming Support: Proper handling of dynamic Ruby features
  • Gem Integration: Knowledge of popular gem APIs and usage patterns
  • Testing Conventions: Support for RSpec, Minitest, and Ruby testing idioms

The Future of Ruby AI Assistance

As the research shows, the Ruby community doesn't need to accept second-class AI assistance. The combination of RAG and post-training approaches demonstrates that language-specific optimization can overcome the inherent biases in general-purpose LLM training.

The Ruby ecosystem is also adapting, with tools like LangChain.rb enabling Ruby developers to build sophisticated AI applications [7]. As one implementation showed, "The Ruby RAG model displayed high accuracy in generating contextually relevant and coherent text, with the integration of Qdrant effectively augmenting the context-awareness of the language model" [8].
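
As a hedged sketch following the patterns in the langchainrb README (the API evolves between versions, so check the current docs), wiring Qdrant-backed retrieval into a Ruby app looks roughly like this:

```ruby
require "langchain"

# RAG in Ruby with langchainrb + Qdrant, per the gem's documented patterns.
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

store = Langchain::Vectorsearch::Qdrant.new(
  url: ENV["QDRANT_URL"],
  api_key: ENV["QDRANT_API_KEY"],
  index_name: "ruby_patterns",
  llm: llm
)

# Index reference material, then answer questions grounded in it
store.add_texts(texts: ["Prefer attr_reader over hand-written getters."])
store.ask(question: "What's the idiomatic way to expose an attribute?")
```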

What This Means for Ruby Teams

Ruby teams no longer need to accept inferior AI assistance due to training data limitations. By choosing AI tools that specifically address Ruby's challenges through RAG and post-training approaches, teams can achieve:

Code Quality Improvements

  • Idiomatic Ruby suggestions
  • Framework-specific best practices
  • Proper metaprogramming patterns
  • Ruby testing conventions

Development Acceleration

  • Faster code review cycles
  • Reduced debugging time
  • Better refactoring suggestions
  • Improved code documentation

Frequently Asked Questions

Why do LLMs perform poorly on Ruby code compared to Python?

LLMs struggle with Ruby due to training data bias toward Python-centric benchmarks, lower representation in quality datasets, and evaluation systems that prioritize Python performance. Despite Ruby's presence on GitHub and Stack Overflow, the training data quality and benchmark focus heavily favor Python.

Does Ruby have enough representation in LLM training datasets?

Ruby has significant representation in major training datasets like StarCoder (783 GB across 86 languages) and a substantial presence on GitHub and Stack Overflow. However, the quality of Ruby training data is lower than that of first-tier languages, which affects LLM performance despite adequate quantity.

How does Propel improve Ruby code understanding?

Propel uses RAG (Retrieval Augmented Generation) combined with post-training fine-tuning specifically for Ruby codebases. This provides access to Ruby-specific patterns, frameworks, and idioms that may be underrepresented in general LLM training data, resulting in better Ruby code generation and review.

What is the difference between RAG and fine-tuning for Ruby code?

RAG retrieves relevant Ruby code examples and documentation during inference, while fine-tuning adjusts model weights on Ruby-specific datasets. Propel combines both: RAG provides immediate access to Ruby patterns, while post-training teaches the model Ruby-specific syntax and conventions.

Conclusion

The poor performance of LLMs on Ruby code isn't due to lack of training data, but rather the quality, structure, and evaluation bias toward Python-centric benchmarks. While Ruby maintains significant representation in major datasets, the combination of benchmark bias, quality issues, and Python supremacy in evaluation has created a performance gap.

However, modern approaches using RAG and post-training fine-tuning demonstrate that these limitations can be overcome. By specifically addressing Ruby's unique challenges—from metaprogramming complexity to framework-specific idioms—advanced AI systems can provide Ruby developers with the intelligent assistance their language and community deserve.

References

[1] Continue.dev. "LLMs are helpful with Python, but what about all of the other programming languages?" Continue Blog. https://blog.continue.dev/programming-languages/
[2] ArXiv. "LLMs Love Python: A Study of LLMs' Bias for Programming Languages and Libraries." arXiv:2503.17181v1. https://arxiv.org/html/2503.17181v1
[3] ProjectPro. "15+ High-Quality LLM Datasets for Training your LLM Models." https://www.projectpro.io/article/llm-datasets-for-training/1027
[4] RubyCademy. "My two cents on coding and LLMs. Why real expertise still matters." Medium. https://medium.com/rubycademy/my-two-cents-on-coding-and-llms-2ae932c9f0d2
[5] Prompt Engineering Guide. "Retrieval Augmented Generation (RAG) for LLMs." https://www.promptingguide.ai/research/rag
[6] SuperAnnotate. "RAFT: Combining RAG with fine-tuning." https://www.superannotate.com/blog/raft-retrieval-augmented-fine-tuning
[7] GitHub. "patterns-ai-core/langchainrb: Build LLM-powered applications in Ruby." https://github.com/patterns-ai-core/langchainrb
[8] Rayyan Shaikh. "Step-by-Step Guide to Building LLM Applications with Ruby (Using Langchain and Qdrant)." Medium. https://medium.com/@shaikhrayyan123/step-by-step-guide-to-building-llm-applications-with-ruby-using-langchain-and-qdrant-5345f51d8a76
[9] Cursor Directory. "Rails/Ruby Cursor Rules." Cursor.directory. https://cursor.directory/rails-ruby-cursor-rules

Continue Learning

Autonomous Code Review Tools: Complete Enterprise Guide →

Learn how modern AI systems handle code review across different programming languages

Top-Rated AI-Driven Code Refactoring Tools →

Compare AI-powered refactoring tools and their language-specific capabilities

AI Code Review Best Practices →

Essential practices for implementing AI-powered code reviews effectively

Ready to Transform Your Code Review Process?

See how Propel's AI-powered code review helps engineering teams ship better code faster with intelligent analysis and actionable feedback.
