The main APIs are:
OP (Open Platforms) - for creating and controlling platforms
VMI (Virtual Machine Interface) - for creating processors
BHM (Behavioral Hardware Modeling) - for behavioral modeling
PPM (Peripheral Programming Model) - for peripherals
[There was also ICM for creating platforms, but this was replaced by OP in 2015/16]
The main OVP download package provides the headers and documentation/reference material for all the APIs, along with extensive training material and a fully worked-through processor model. The API download package includes user guides and other documentation. The API reference documentation is generated using Doxygen, which processes the API headers directly and creates the reference material automatically from them, ensuring that the reference documentation is always up to date and correct by construction.
The best way to get a feel for the APIs is to download some of the examples or demos and browse them. The main OVP package includes many worked examples for each of the APIs.
OP - for creating and controlling platforms
The OP API is a C API used to create the hierarchical structure of platforms and the interconnection of their components. OP is also used to control these platforms/modules with a test bench or test harness.
OP allows the creation and instantiation of modules. A module is a separately compilable component that instances other sub-modules and components such as multiple processors, buses, nets, memories and peripherals. Using buses and nets, memories and processors can be interconnected in arbitrary topologies, enabling arbitrary shared-memory multiprocessor configurations and heterogeneous multiprocessor platforms. There are also OP calls to load application programs into simulated memories and to simulate the platform.
A module is compiled from C to a shared object. This shared object is then instanced in a test harness, or simulated directly with the provided harness.exe program.
To create an executable of a platform, write a C program with a main() in it (this is the test harness) that uses the OP functions to instance all of the sub-modules and other components in your system. OP also has functions to control the simulation. Then compile it using a native C compiler. OVP recommends using MinGW (www.mingw.org) and MSYS.
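The overall shape of such a harness can be sketched as follows. Note that the function names below are illustrative stand-ins with stub bodies so the sketch is self-contained, not the real OP API (consult the OP reference documentation for actual names and signatures); what matters is the structure: create the root module, instance components, connect them, load the application, then simulate.

```c
/* Sketch of an OP-style test harness. The functions used here are
 * simplified stand-ins (stubs defined inline), NOT the real OP API. */
#include <stdio.h>

typedef struct { int components; } Module;

/* --- illustrative stubs standing in for platform-construction calls --- */
static Module *root_module_new(void)                 { static Module m; m.components = 0; return &m; }
static void processor_new(Module *m, const char *n)  { m->components++; printf("processor %s\n", n); }
static void memory_new(Module *m, const char *n)     { m->components++; printf("memory %s\n", n); }
static void bus_connect(Module *m)                   { (void)m; printf("bus connected\n"); }
static void application_load(Module *m, const char *elf) { (void)m; printf("loaded %s\n", elf); }
static int  simulate(Module *m)                      { return m->components; }

/* Build and run the platform; in a real harness this code lives in main(). */
int harness(void) {
    Module *root = root_module_new();
    processor_new(root, "cpu0");       /* e.g. one processor instance */
    memory_new(root, "ram0");
    bus_connect(root);                 /* processor and memory share a bus */
    application_load(root, "app.elf"); /* load the cross-compiled application */
    return simulate(root);             /* run until the application exits */
}
```
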
On the OVP web site there are presentations and videos available demonstrating OP usage. There are many examples in the Examples/PlatformConstruction and Examples/SimulationControl directories of the main OVP download package.
OP is not targeted at cycle-accurate simulation - use HDL or SystemC if you need this. OVPsim and OP are targeted at system modeling and simulation to enable embedded software development. If you need more timing accuracy from OVP (cycle-approximate, etc.), then please contact a commercial vendor such as Imperas.
SystemC models cannot be encapsulated in OVPsim, but it is straightforward to encapsulate OVPsim models in SystemC or TLM2. This encapsulation is via the OP functions. Note that the performance of OVPsim encapsulated in a third-party environment may be greatly restricted by that environment. If you are using SystemC, then we recommend using OVP models via the TLM2 interface. All OVP models, including processors and peripherals, can be wrapped for use with SystemC TLM2, and many already come with native SystemC interfaces.
If you need to use OVPsim to simulate your platforms and you have an XML or SPIRIT (IP-XACT) netlist, then you need to convert this to a C program using OP calls. Imperas has commercial tools to accomplish this, and also to enable the writing of platforms very efficiently in TCL. Please contact Imperas (info[at]imperas.com) for more information.
VMI - for creating processors
VMI is a C API enabling processor model behavior to be described to the OVPsim simulator.
Using the VMI, you describe the behavior of each processor instruction using calls that map onto the Just-In-Time Code Morphing compiler primitives for superfast simulation speeds.
Complex features such as MMUs, TLBs, and processor operating modes (such as privileged instructions) can all be modeled so efficiently that execution speeds of hundreds of millions of simulated instructions per second are normal.
The VMI interface also allows elegant specification of external semi-hosting libraries (allowing services such as file I/O to be provided by the native host) and extension libraries (so that customer-specific instruction set extensions can be seamlessly integrated).
The main components of a VMI processor model are:
Instruction decode, instruction behavior, disassembly, L1 cache, exceptions, TLB, asynchronous events, and the debug interface. Around the model there is also a need to model the L2 cache, any shared resources (e.g. in an SMP processor), any extensions, processor-independent I/O support, etc.
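The decode/behavior split at the heart of such a model can be illustrated with a toy example. The instruction encoding, register file, and function names below are invented for this sketch; a real VMI model maps each behavior onto the simulator's code-morphing primitives rather than interpreting as this toy does.

```c
/* Toy illustration of the decode -> behavior split in a processor model.
 * Invented 32-bit encoding: opcode in bits 31..24, rd/ra/rb in low bits. */
#include <stdint.h>

enum { OP_ADD = 0, OP_SUB = 1 };

typedef struct { uint32_t gpr[8]; } Cpu;   /* tiny register file */

/* Decode: extract fields of the made-up encoding. */
static unsigned opc(uint32_t i) { return (i >> 24) & 0xff; }
static unsigned rd (uint32_t i) { return (i >> 16) & 0x7;  }
static unsigned ra (uint32_t i) { return (i >>  8) & 0x7;  }
static unsigned rb (uint32_t i) { return  i        & 0x7;  }

/* Behavior: one function per instruction. */
static void do_add(Cpu *c, uint32_t i) { c->gpr[rd(i)] = c->gpr[ra(i)] + c->gpr[rb(i)]; }
static void do_sub(Cpu *c, uint32_t i) { c->gpr[rd(i)] = c->gpr[ra(i)] - c->gpr[rb(i)]; }

/* Execute one instruction: decode, then dispatch to its behavior. */
void step(Cpu *c, uint32_t instr) {
    switch (opc(instr)) {
    case OP_ADD: do_add(c, instr); break;
    case OP_SUB: do_sub(c, instr); break;
    default: /* undecoded: a real model would raise an exception here */ break;
    }
}
```
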
Typically a 32-bit RISC model can be written in 6-8 weeks and will run at up to 500 MIPS on a 3 GHz desktop PC.
It is possible to encapsulate existing ISS models within OVPsim, provided that they export some basic features (for example, the existing ISS model should be available as a shared object, provide an API to allow it to be run instruction-by-instruction or for a number of instructions, and provide an API allowing memory to be modeled externally). There are API functions in the VMI to enable this ISS encapsulation. Of course, performance may be greatly restricted by the encapsulated ISS speed.
To build VMI models, OVP recommends using MinGW (www.mingw.org) and MSYS.
There are full reference and user guide documents provided in the OVP API download package. There is also a fully worked OR1K model, available as source, which is used as the basis of the VMI training material and is available for download.
If you need help in building a VMI model, then please contact us (info[at]ovpworld.org) or contact Imperas (info[at]imperas.com) to understand the services that can be provided. Imperas has developed custom models for customers in the past.
A model created using the VMI API can be used by OVPsim and the Imperas commercial simulation tools without modification.
Using the VMI, OVPsim provides semi-hosting capabilities that allow intrusive application instrumentation. Intrusive instrumentation means that the application's behavior may well be affected by the "intrusion". A licensed product available from Imperas provides non-intrusive instrumentation and semi-hosting.
Using the VMI, OVPsim can implement arbitrary multiprocessor systems. The performance of heterogeneous platforms can be accelerated by upgrading to the Imperas commercial simulation tools.
It is easy to implement models of processors with 64-bit addressing on the 32-bit Windows XP host.
Any instruction behavior can be implemented using the VMI - there is no restriction. Not all instructions map closely to the Just-In-Time code morphing opcode set: those that do not can be implemented using function calls from Just-In-Time morphed code at run-time.
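The two implementation paths can be sketched as follows. A simple instruction such as ADD maps directly onto a morphing primitive, while a complex one (population count is chosen here arbitrarily) has no single primitive and is instead handled by a plain C helper that the morphed code calls at run time. The names are illustrative, not VMI API.

```c
/* Sketch of the two paths for implementing instruction behavior. */
#include <stdint.h>

/* Simple behavior: maps 1:1 onto a JIT code-morphing primitive. */
uint32_t behavior_add(uint32_t a, uint32_t b) { return a + b; }

/* Complex behavior: no single primitive exists, so the morphed code
 * emits a call to this ordinary C function at run time instead. */
uint32_t behavior_popcount(uint32_t x) {
    uint32_t n = 0;
    while (x) {          /* count the set bits one at a time */
        n += x & 1u;
        x >>= 1;
    }
    return n;
}
```
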
The VMI interface implements most opcodes in 8, 16, 32 and 64-bit widths; some are implemented up to 128 bits.
The VMI API allows for the conditional execution of sections of the behavioral description of an instruction dependent upon actual values held in registers when running.
The VMI has very sophisticated modeling of delay slots and related instruction behavior.
Only x86 Windows and Linux native hosts are currently supported for the VMI. OVPsim itself is currently available only on Windows; a commercial variant for Linux is available from Imperas.
The VMI interface allows implementation of arbitrary MMU/TLB units. The implementation is so efficient that there is no impact on simulation speed in the absence of TLB misses.
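The reason a well-implemented TLB is cheap in the common case can be seen in a minimal sketch: on a hit, translation is just an indexed table lookup, and only a miss pays for the page-table walk. The sizes and the identity-mapping walk below are invented for illustration and bear no relation to the VMI's internal implementation.

```c
/* Minimal direct-mapped TLB sketch: hits are a cheap table lookup,
 * misses fall back to a (stubbed) page-table walk and refill. */
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12   /* 4 KiB pages */

typedef struct { uint32_t vpn; uint32_t pfn; int valid; } TlbEntry;
static TlbEntry tlb[TLB_ENTRIES];

/* Stand-in for a real page-table walk: identity-map every page. */
static uint32_t page_walk(uint32_t vpn) { return vpn; }

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    TlbEntry *e = &tlb[vpn % TLB_ENTRIES];
    if (!e->valid || e->vpn != vpn) {   /* miss: refill from the walk */
        e->vpn = vpn;
        e->pfn = page_walk(vpn);
        e->valid = 1;
    }
    uint32_t off = vaddr & ((1u << PAGE_SHIFT) - 1);
    return (e->pfn << PAGE_SHIFT) | off;   /* hit path: lookup + concat */
}
```
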
The VMI natively supports a range of floating point operations, and the set of available operations is being continuously enhanced.
The VMI can be used for both RISC and CISC; any instruction format can be supported.
PPM/BHM - for creating behavioral models / peripherals
PPM & BHM are modeling APIs. They are used to write behavioral models of hardware/software systems which are peripheral to the processors in the platform being developed.
The models run in a protected environment (they cannot crash the simulator) on the Peripheral Simulation Engines (PSEs).
The difference between PPM and BHM is:
BHM - Behavioral modeling
processes, delays, events
start processes (like the Verilog/VHDL always/process block)
wait for an event or for time
diagnostic text output
PPM - Peripheral modeling
interface to the platform
bus ports and net ports
connect to a bus, connect to a net
bridge between peripheral address spaces and real address spaces
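The flavor of a memory-mapped peripheral model can be sketched with a toy register bank driven by read and write callbacks on its bus port. The types, names, and registration scheme below are invented for this sketch; real PPM models register their callbacks through the PPM API and run on a PSE.

```c
/* Toy memory-mapped peripheral in the PPM/BHM spirit: a small register
 * bank with read/write callbacks, pretending to be a UART-like device. */
#include <stdint.h>
#include <stdio.h>

#define NREGS 4

typedef struct {
    uint32_t regs[NREGS];   /* word-addressed register bank */
} Uart;

/* Write callback: a bus write lands here. Writing register 0
 * "transmits" a character as a behavioral side effect. */
void bus_write(Uart *u, uint32_t offset, uint32_t value) {
    uint32_t idx = (offset >> 2) % NREGS;   /* byte offset -> register index */
    u->regs[idx] = value;
    if (idx == 0)
        printf("tx: %c\n", (char)value);
}

/* Read callback: a bus read returns the stored register value. */
uint32_t bus_read(Uart *u, uint32_t offset) {
    return u->regs[(offset >> 2) % NREGS];
}
```
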
Models written with PPM & BHM are in C and need to be compiled using the OVP-supplied PSE tool-chain (a version of gcc, as, ld, nm, etc.).
The documentation for these APIs and on modeling behavioral components is available in a download package.
BHM/PPM has concepts similar to SystemC, but each instance of each model exists in its own private address space. Models are restricted to communicating through the published API (no anarchy). There is no need to allocate space for each model or to maintain a "this" pointer.
Models running on a PSE have very limited access to the host computer: open, close, read, write, and a few others. However, PSE functions can be intercepted in the same way as application code, and the intercept library can use system functions to communicate with the outside world.
It is normally straightforward to wrap existing C functions in a BHM/PPM peripheral model.
If you are modeling from scratch, there are tools available from Imperas (info[at]imperas.com) to create the outline of memory-mapped bus interfaces, net connections and the model start-up code from a simple XML format or TCL script into a C template file. This supports the current trend towards convergence of all design data into one database.
ICM - for creating platforms (superseded by OP)
The ICM API is a C API that was used to create platform netlist designs/systems for use with OVPsim. In 2015/16 it was superseded by the OP API, which allows the specification and use of hierarchical module components and also allows finer control over simulation.