
In this post, we'll explore two advanced JIT compilers for Ruby: TruffleRuby and JRuby, looking at their benefits and drawbacks. We'll also briefly touch on the recently announced ZJIT compiler.
But before we get started, let's define JIT compilation.
What is JIT compilation?
JIT (Just-In-Time) compilation is a technique that combines aspects of code interpretation and traditional compilation. In statically compiled languages that utilize AOT (ahead-of-time) compilation, source code is translated into machine code before execution. Optimizations are performed during this compilation stage, and the resulting machine code is executed directly at runtime with no further changes.
In purely interpreted languages, such as some implementations of Ruby or Python, source code is executed directly by an interpreter at runtime, without being compiled into machine code in advance.
JIT compilation bridges these approaches by combining the speed of compiled code with the flexibility of interpretation. During execution, the JIT compiler gathers runtime information, such as which parts of the code are executed frequently (hotspots). The compiler translates these hotspots into optimized machine code. Subsequent executions of these parts of the code use the machine code instead of interpretation, significantly improving performance over time.
This process means that with JIT compilation, programs might start out slower, but their performance improves as more code is optimized during runtime. This approach to optimization is what gives JIT compilation its name: "just in time".
Why Was JIT Compilation Introduced to Ruby?
The introduction of a Just-In-Time (JIT) compiler in Ruby (specifically in Ruby 2.6) was motivated by several longstanding performance challenges. These problems were rooted in Ruby’s dynamic nature and interpreter-based design, which, while making the language highly flexible and developer-friendly, also introduced certain inefficiencies. Some of the reasons for the introduction of JIT compilation in Ruby include:
- Interpretation Overhead: Ruby’s MRI (Matz’s Ruby Interpreter) traditionally parses code into bytecode and interprets it instruction by instruction, which leads to significant overhead during execution compared to compiled languages. Every time a Ruby program runs, the interpreter has to parse and execute the code dynamically, resulting in slower performance for computationally intensive tasks.
- Global Virtual Machine Lock (GVL): The GVL (formerly known as GIL, Global Interpreter Lock) in MRI prevents true parallel execution of Ruby threads. While this is not directly addressed by the JIT, the inability to use multiple CPU cores efficiently exacerbated the demand for better single-threaded performance, which the JIT could help improve.
- Competition from Other Languages: Languages like Python and JavaScript introduced highly efficient JIT compilers (e.g., PyPy and V8) that dramatically improved runtime performance. The lack of similar advancements in Ruby led to calls for catching up with modern performance expectations, especially as applications became more demanding.
Side note: To understand more about how a JIT works under the hood, TenderJIT, a pure-Ruby JIT written by Aaron Patterson, does a good job of explaining the basic concept, using a very simple `add` method example.
How the JIT Compiler Addresses These Problems
The JIT compiler works by dynamically converting frequently executed bytecode into native machine code at runtime. This process reduces the interpretation overhead and enables:
- Execution Speed: Native code executes faster than interpreted bytecode.
- Method Inlining and Hot Path Optimization: The JIT compiler can identify and optimize “hot paths” (frequently executed code paths), making method calls and loops faster.
- Hardware-Specific Optimizations: Native code generation can take advantage of specific CPU features for better performance.
- Improved Single-Threaded Performance: By optimizing the execution of single threads, JIT mitigates some of the limitations imposed by the GVL.
It’s worth noting that JIT compilers in Ruby are not a panacea. Generating native code dynamically increases memory usage, and Ruby’s high dynamism, such as its flexible typing and runtime object modifications, limits the extent of optimization achievable compared to statically-typed languages.
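To make the hot-path idea concrete, here's a minimal, hypothetical sketch of exercising CRuby's built-in YJIT (assuming a Ruby 3.3+ build with YJIT compiled in): repeated calls make a method "hot", and starting Ruby with the `--yjit` flag lets the JIT compile it to native code.

```ruby
# hot_loop.rb: run with `ruby --yjit hot_loop.rb`
# (assumes a CRuby 3.3+ build with YJIT compiled in)

def hot_method(n)
  sum = 0
  n.times { |i| sum += i * i }
  sum
end

# Repeated calls mark the method as "hot", prompting the JIT to compile it.
100_000.times { hot_method(100) }

puts RubyVM::YJIT.enabled? # => true when started with --yjit
```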
Let's now explore some advanced JIT compilers (highlighting their key features, strengths, and limitations), starting with TruffleRuby.
TruffleRuby
TruffleRuby is an implementation of Ruby built on top of GraalVM, a high-performance runtime that supports multiple languages. TruffleRuby uses the GraalVM’s JIT compiler and Truffle framework to achieve significant performance improvements, particularly for computationally intensive Ruby programs. It comes in two distributions:
- Native Standalone: This only contains TruffleRuby in the Native configuration.
- JVM Standalone: Includes TruffleRuby in the JVM configuration, with support for other languages such as Java, JavaScript, Python, and WebAssembly.
Each comes with its own trade-offs in terms of startup time, time to reach peak performance, and so on. Check out this detailed comparison for more information.
Key Features
The key features of TruffleRuby make it a powerful tool and include:
- GraalVM JIT Integration: TruffleRuby can run idiomatic Ruby code faster through the GraalVM JIT Compiler, especially for many CPU-intensive tasks.
- Parallel Execution: TruffleRuby does not have a global interpreter lock and so can run Ruby code in parallel.
- Polyglot Support: It allows Ruby to interact with other GraalVM-supported languages, like Java, JavaScript, Python, and WebAssembly, enabling mixed-language applications with low overhead.
- Support for C Extensions: TruffleRuby supports most C extensions, including database drivers.
- Multi-language Tooling: TruffleRuby provides tooling, such as debuggers and monitoring, that work across languages. It supports VisualVM and has the CPU Tracer, CPU Sampler, and Coverage tools for profiling. For debugging, it has VSCode support, Chrome Inspector, and NetBeans support.
Limitations
TruffleRuby runs Rails and is compatible with many gems, including C extensions. It also passes around 97% of the ruby/spec test suite. According to this research paper:
Our system is able to run existing almost unmodified C extensions for Ruby written by companies and used today in production. Our evaluation shows that it outperforms MRI running the same C extensions compiled to native code by a factor of over 3.
Nevertheless, TruffleRuby is not 100% compatible with MRI. It tries to match MRI's behaviour as closely as possible, but in a few limited cases, it is deliberately incompatible in order to provide greater capability. Check out this detailed list of TruffleRuby's unsupported features and libraries.
How to Use TruffleRuby
TruffleRuby can be installed via several Ruby managers/installers. In some cases, you might encounter the error `ERROR openssl@1.1 from Homebrew is required, run 'brew install openssl@1.1'` because openssl@1.1 was disabled on 2024-10-24. This happens because TruffleRuby depends on OpenSSL 1.1 for compatibility, and Homebrew no longer supports it. Refer to the installer's documentation for a solution. For example, if using rvm, running `rvm get master` and repeating the installation should resolve the issue.
- Install TruffleRuby:
rvm install truffleruby
- Verify the Installation:
rvm list
You should see truffleruby-24.1.1 listed alongside your other available Ruby versions.
- Switch to TruffleRuby:
rvm use truffleruby
- Verify the current version:
which ruby # or ruby --version
TruffleRuby should be the output.
- Test Polyglot Capabilities:
TruffleRuby's polyglot capabilities allow for seamless integration with other languages like Java. For example, you can create and manipulate a Java array directly in Ruby:
```
# Run irb to open a session
truffleruby-24.1.1 :001 > array = Java.type('int[]').new(4)
 => #<Polyglot::ForeignArray[Java] int[]:0x67e9fc75 [0, 0, 0, 0]>
truffleruby-24.1.1 :002 > array[2] = 42
truffleruby-24.1.1 :003 > p array[2]
42
 => 42
truffleruby-24.1.1 :004 > array
 => #<Polyglot::ForeignArray[Java] int[]:0x67e9fc75 [0, 0, 42, 0]>
```
If we attempt to input a string into the array, we get a TypeError because we initialized it as an array of integers using the Java type system.

```
truffleruby-24.1.1 :006 > array[1] = "s"
<internal:core> core/truffle/polyglot.rb:221:in `write_array_element': Cannot convert '"s"'(language: Ruby, type: String) to Java type 'int': Invalid or lossy primitive coercion. (TypeError)
```
For those who have long wished for a type system in Ruby, TruffleRuby’s ability to leverage one through other languages does sound like great news.
TruffleRuby includes built-in support for interop with `llvm` and host Java, available without any additional setup. For developers seeking to leverage other GraalVM-supported languages like JavaScript or Python, the JVM Standalone version of TruffleRuby is required, as the Native Standalone version does not support installing additional languages.

To install additional languages, you can use the `truffleruby-polyglot-get $LANGUAGE` command. For example, to install JavaScript support, run `truffleruby-polyglot-get js`.
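Once a language is installed, TruffleRuby's Polyglot API can evaluate code written in it directly from Ruby. Here's a minimal sketch, assuming the JVM Standalone with JavaScript installed via `truffleruby-polyglot-get js`:

```ruby
# Runs only on TruffleRuby (JVM Standalone) with the JavaScript language installed.
js_array = Polyglot.eval('js', '[1, 2, 3].map(x => x * 2)')

puts js_array[1]   # => 4; the foreign JavaScript array supports Ruby-style indexing
puts js_array.size # => 3
```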
Now let's move on to see what JRuby has to offer.
JRuby
JRuby is an implementation of Ruby on the Java Virtual Machine (JVM). It aims for high compatibility with CRuby so Ruby developers can run existing code with minimal changes while taking advantage of JVM performance, tooling, and deployment options. The name "JRuby" stems from "Just Ruby".
Key Features
The key features of JRuby include:
- Thread-level Parallelism: JRuby maps Ruby threads to Java threads, which are usually mapped directly to native threads. This means a simple Ruby `Thread.new { }` produces a real OS thread that runs concurrently with the parent thread. With JRuby, we get concurrency without a global interpreter lock. Because JRuby runs on the JVM, you also have access to everything the JVM offers for concurrency, like thread-safe collections, queues, sets, and other data structures.
- Java Integration: Because JRuby is tightly integrated with Java, Java classes can be used in your Ruby program, and JRuby can be embedded into a Java application (see the sketch after this list).
- High Performance: JRuby includes a JIT compiler that translates frequently executed Ruby code into JVM bytecode at runtime, which the JVM can further compile to native code, so it is able to offer high performance, especially for long-running processes.
- Lower Memory Usage: JRuby can process a higher number of requests per second per MB of memory. This makes it a good choice for web applications that scale to a certain size and need to save on hosting costs.
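To make the Java integration point concrete, here's a small sketch of calling a standard Java collection from plain Ruby code (the file name `java_interop.rb` is just an example):

```ruby
# java_interop.rb: run with `jruby java_interop.rb`
require 'java'

list = java.util.ArrayList.new
list.add('hello')
list.add('from JRuby')

list.each { |item| puts item } # Java collections respond to Ruby-style iteration
puts list.size                 # => 2
```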
Limitations
While JRuby aims for compatibility with MRI, certain differences exist due to its JVM-based nature. Some of these differences may be perceived as limitations, particularly for those requiring exact MRI behavior. A few include:
- No `fork()` Support: JRuby doesn't implement `fork()` on any platform, including those where `fork()` is available in MRI. This is because most JVMs cannot be safely forked.
- No Continuations: JRuby does not support continuations, as the JVM lacks built-in support for capturing execution state in this way. While JRuby does support fibers (a form of delimited continuation), each fiber is backed by a native thread. This design means that fiber-heavy applications may run into resource constraints, as they are limited by the system’s thread capacity.
To identify more differences between MRI and JRuby and possible areas of limitation, check out JRuby's documentation.
How to Use JRuby
To run JRuby, you will need a JRE (Java Runtime Environment) version 8 or higher.
- Install JRuby:
rvm install jruby
- Verify the Installation:
rvm list
You should see jruby-9.4.9.0 listed alongside your other available Ruby versions.
- Switch to JRuby:
rvm use jruby
- Verify the current version:
which ruby
The output should point to your JRuby installation.
- Alternatively, run `jruby -v` to confirm that we can run scripts using the `jruby` command. If this results in an error like:
The operation couldn’t be completed. Unable to locate a Java Runtime. Please visit http://www.java.com for information on installing Java.
This indicates that Java is either missing or incorrectly configured. To resolve this, take the following steps:
```
brew install openjdk # Install Java

# Ensure the JDK is properly linked
sudo ln -sfn $(brew --prefix openjdk)/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk

# Set the JAVA_HOME Environment Variable
export JAVA_HOME=$(/usr/libexec/java_home)
```
Running `jruby -v` now should produce a result similar to:
jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 23.0.2 on 23.0.2 +jit [arm64-darwin]
- Test Native Thread Support:
Let's create a JRuby test script called `jruby_test.rb`.
```ruby
require 'benchmark'

# Demonstrating JRuby's native thread support
puts "\nStarting multi-threaded computation..."

def expensive_operation(id)
  sum = 0
  5_000_000.times { sum += rand }
  puts "Thread #{id} finished computation"
end

multi_thread_time = Benchmark.measure do
  threads = 4.times.map { |i| Thread.new { expensive_operation(i) } }
  threads.each(&:join)
end

puts "Total time for multi-threaded computation: #{multi_thread_time.real.round(4)} seconds"
puts "All threads completed!"
```
If we run this script via `jruby jruby_test.rb`, the output is as follows:

```
Starting multi-threaded computation...
Thread 2 finished computation
Thread 1 finished computation
Thread 3 finished computation
Thread 0 finished computation
Total time for multi-threaded computation: 1.1317 seconds
All threads completed!
```
Threads work as expected in JRuby, and so do a lot of other MRI features.
Performance Benchmarks Using Optcarrot
Optcarrot is a NES emulator designed for benchmarking Ruby implementations. It reports its results in frames per second (fps). With the `--benchmark` option, Optcarrot runs in headless mode (i.e., no GUI), plays a ROM for the first 180 frames, and prints the FPS of the last ten frames. Higher FPS means better performance.
To run a benchmark using Optcarrot, use the following steps:
```
$ git clone http://github.com/mame/optcarrot.git
$ cd optcarrot
$ /path/to/ruby bin/optcarrot --benchmark examples/Lan_Master.nes
```
Let's run all benchmarks on the following system for consistency:
Test Machine Specs
- Model: MacBook Air (M3, 2023)
- Chip: Apple M3 (8-core CPU, 10-core GPU)
- Memory: 16GB Unified RAM
- OS: macOS Sonoma 14.6.1
The results (as output by Optcarrot for the dev versions of TruffleRuby Native and JVM, Ruby 3.4.1 with and without YJIT, and JRuby) are as follows:
- ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin23]
  fps: 64.45375447170815
  checksum: 59662
- ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
  fps: 259.74026114768935
  checksum: 59662
- truffleruby 25.0.0-dev-c229bfd1, like ruby 3.3.5, Oracle GraalVM JVM [arm64-darwin20]
  fps: 324.43734547614775
  checksum: 59662
- truffleruby 25.0.0-dev-c229bfd1, like ruby 3.3.5, GraalVM CE Native [arm64-darwin23]
  fps: 230.9387340234719
  checksum: 59662
- jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 23.0.2 on 23.0.2 +jit [arm64-darwin]
  fps: 68.48935044859236
  checksum: 59662
- jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 23.0.2 on 23.0.2 +indy +jit [arm64-darwin]
  fps: 115.26756260357814
  checksum: 59662
Performance Comparison Table
The summary of the above results is as follows:

| Implementation | FPS |
| --- | --- |
| TruffleRuby (GraalVM JVM) | 324.44 |
| Ruby 3.4.1 + YJIT | 259.74 |
| TruffleRuby (GraalVM CE Native) | 230.94 |
| JRuby 9.4.9.0 (+indy +jit) | 115.27 |
| JRuby 9.4.9.0 (+jit) | 68.49 |
| Ruby 3.4.1 (no JIT) | 64.45 |
TruffleRuby on GraalVM (JVM mode) remains the best-performing Ruby implementation for this benchmark, with MRI + YJIT following closely behind. Initially, JRuby + Indy + JIT performed at 62.53 FPS, which was slower than MRI, but after multiple runs, its performance stabilized at 115.27 FPS. While this is still significantly slower than TruffleRuby and YJIT, it's important to note that JVM-based optimizations excel in long-running applications, where JRuby’s JIT has more time to optimize execution.
Impact of Advanced JIT Compilers on the Ruby Ecosystem
YJIT has made significant strides in improving the performance of MRI, keeping it competitive among Ruby implementations. The presence of JRuby and TruffleRuby means that developers can choose an implementation based on their application's needs — whether it’s JVM integration, native execution speed, or compatibility with the broader Ruby ecosystem.
This diversity ensures that performance concerns no longer force developers to switch to a less enjoyable language. Instead, Ruby remains a viable choice across different workloads.
In 2023, Aaron Patterson demonstrated that YJIT-enabled Ruby could outperform C extensions in some cases. This reinforces the idea that advanced JIT compilers are not just making Ruby faster, they are redefining its competitive edge in the developer ecosystem.
In May 2025, a new JIT compiler called ZJIT was introduced with the goal of building a more accessible, community-friendly compiler architecture for Ruby. The team describes several architectural choices that set it apart from YJIT:
Instead of compiling YARV bytecode directly to the low-level IR (LIR), it uses a high-level SSA-based intermediate representation (HIR).
Instead of compiling one basic block at a time, it compiles one entire method at a time.
Instead of using lazy basic block versioning (LBBV) to profile types, it reads historical type information from the profiled interpreter.
Instead of doing optimizations while lowering YARV to LIR, it has a high-level modular optimizer that works on HIR.
Source: ZJIT has been merged into Ruby
As they note, there are important tradeoffs:
While YJIT’s architecture allows for easy interprocedural type-based specialization, ZJIT’s architecture gives more code at once to the optimizer.
Source: ZJIT has been merged into Ruby
While ZJIT is still in its early stages and has not attained peak performance yet, its introduction does mark a meaningful shift towards an architecture designed for more experimentation and collaboration. This opens the door for a broader group of developers, beyond compiler veterans, to shape Ruby’s performance future. It’s another signal that Ruby isn’t just keeping up, it’s constantly evolving to remain competitive while preserving developer happiness.
Wrapping Up
In this article, we talked about two advanced JIT compilers: TruffleRuby and JRuby. We explored their key features and carried out performance benchmarks using Optcarrot for each of them (along with Ruby YJIT), to determine which is the fastest.
We also touched on the ZJIT compiler, which is in its infancy but shows promise as a next generation JIT compiler.
Happy coding!
