Architecture & Technology
As described in the product introduction, there are two major pieces to the product: the code generator and the runtime library. The code generator does all the heavy work that allows your developers to write C++ code that internally delegates to Java. The runtime library is responsible for actually making your application work when you try to run it.
From an end user's perspective, the runtime library is the much more exciting part because it contains the magic that ties all pieces together in the finished application. Let's take a look at the runtime architecture.
The picture above illustrates two alternatives for the regular flow of control at runtime. On the right-hand side you have your own handwritten C++ code. From it you use the JunC++ion-generated C++ proxy classes, which are wrappers around the Java types that you wish to use. The generated code is only a thin veneer over the runtime library, to which it delegates as quickly as possible after doing a little bit of call argument preprocessing.
The runtime library is written in C and C++ and does all the interesting work. It has two alternative "backend" implementations:
- The first implementation uses the Java Native Interface (JNI) to load a Java Virtual Machine (JVM) into the C++ process. In this mode (the default mode) there is no other process involved because all Java activities are executed in a JVM loaded into your C++ process.
This is the integration solution with the best possible performance. Please also take a look at this page, which goes into more detail about JNI.
- The second implementation uses a proprietary TCP/IP protocol to connect to a Shared JVM server. The Shared JVM server is a separate, pure Java process that is running somewhere on your network. It hosts your Java classes and every C++/Java interaction involves an interprocess call. Think of the Shared JVM server as a lightweight application server that can make any plain old Java object (POJO) available as a remote object.
This integration solution incurs a severe performance penalty compared with the JNI-based in-process integration.
Please note that both backend implementations use the same runtime library, the same generated proxy classes and the same user-written code. The only difference between the two modes lies in some runtime configuration settings.
Regardless of the mode that you're using, you can control when the JVM is loaded or a connection to a Shared JVM is made. If you take no explicit steps, the first use of a proxy type that requires delegation to Java will on-demand-load the JVM or connect to the server. Alternatively, you can explicitly load/connect to a JVM by using the configuration API in your application.
Once a JVM has been loaded or connected to, it typically remains active until your process terminates. This is important to remember: you cannot have multiple JVM load/unload cycles in one process. This restriction comes from the JVM implementations themselves, which do not support being unloaded and reloaded; it is not a limitation of the JunC++ion runtime.
Why use a Shared JVM?
If the JNI-based alternative is so superior in performance, why would we even implement the out-of-process Shared JVM alternative? The answer is simple: it's yet another deployment option for you. If any of the following statements are true, you might be interested in the Shared JVM option:
- You have to run many JunC++ion-enabled processes concurrently.
In in-process mode, N processes would mean N times the JVM overhead, which might be unacceptable given the machine's hardware constraints.
- Your C++ process cannot tolerate the overhead (virtual memory allocation) imposed by an in-process JVM.
Even if you configure a JVM to use a small heap, many JVMs allocate a lot of virtual address space. This may be prohibitive for some applications.
- Java is prohibited on the deployment machine.
- Your C++ process runs on a machine on which you don't have access to a JVM.
We already touched on the runtime configuration API when we mentioned that Shared JVM mode is a runtime configuration option. The JunC++ion runtime has a sophisticated configuration framework that gives you complete control over all runtime aspects of the mixed-language application.
You can of course configure all options explicitly in your application code. The following snippet illustrates this approach:
xmog_jvm_loader & loader = xmog_jvm_loader::get_jvm_loader();
loader.setClassPath( "../lib/myapp.jar" );
loader.setLibraryPath( "." );
loader.setMaximumHeapSizeInMB( 256 );
Even without a detailed explanation, you can see that the code acquires a reference to the JVM loader object and then uses the configuration API to specify Java settings. The configuration API offers many more options through easy-to-use wrapper methods.
XML file-based Configuration
Alternatively, you could use a configuration file and a one-liner in your application:
xmog_jvm_loader::setConfigFile( "./myapp.exe.config" );
The corresponding config file could contain the following data:
<Loader name="Default">
  <add key="ClassPath" value="../lib/myapp.jar" />
  <add key="LibraryPath" value="." />
  <add key="MaximumHeapSizeInMB" value="256" />
</Loader>
The element schema is based on .NET's config file schema. If you add some .NET-specific element declarations, you can use the same config file for both your JunC++ion- and JuggerNET-enabled applications.
The most sophisticated configuration option allows you to register a callback that the JunC++ion runtime invokes at various points in the initialization sequence. This approach allows you to build integration modules that are essentially self-configuring! You can add such a configuration hook to a shared library of proxy types, and the hook will:
- configure the runtime default settings specifically for the contained proxy classes
- check and/or override user-specified configuration settings before attempting to load a JVM
- check for correct configuration after loading the JVM