The most common practice today for developing embedded software is to write the initial software in a desktop Windows or Linux development environment and begin unit testing there, with the software running under the general-purpose operating system. This development environment is often very different from the final target system - for example, using host threads instead of the separate processors of the real system - requiring substantial rewriting, modification, and porting for final deployment.
When a prototype of the embedded system or chip becomes available, the software is ported to this target environment using cross compilers and related tools targeting the embedded processors, such as ARM, MIPS, PowerPC, or SH. FPGAs might be used in the prototype to emulate the SoC. A simple debugger is then often connected via a JTAG port.
There are many challenges with this traditional approach. A hardware prototype of the system is often physically unreliable, is not readily available at all software development sites (especially those offshore), and, worst of all, often arrives only very near the end of the targeted product development schedule. All of these challenges make it hard to have software ready soon after the product hardware is available - yet the target should be to get the product's embedded software up and running very shortly after hardware availability.
These challenges become acute as more processors interact in the embedded system. Multi-core and multi-processor systems bring further challenges: their hardware prototypes often provide limited controllability, observability, and debuggability. When tracking down complex multi-processor issues, the bugs are often very hard to reproduce reliably and isolate in complex real-time hardware. The prototype simply lacks the facilities to comprehend all the activity, and race conditions are possible and potentially disastrous.
As a result, development teams are scrambling to find a better solution. As more and more chips become multi-core, these teams are looking for an alternative to simply waiting for the prototypes - they cannot afford to be that late to market.
If a virtual model of the hardware platform were available to software developers at the very earliest stage of product development, and the initial testing of software were done on that virtual platform, teams could reduce SoC schedules by months and significantly reduce initial development and maintenance costs for SoC embedded software.
This is what OVP enables: freely available virtual platform models early in the product development cycle.
Yes, this methodology of using a model of the system for software development is most critical for an SoC or MPSoC, where there is Software on Chip or Multi-Processor Software on Chip, but it also benefits developers of any embedded software. It is far easier to develop software in conjunction with a good simulation of a device than on the real embedded device.
OVP targets the building of models of embedded components so that embedded software can be developed efficiently.
Hardware analogy: in the mid-1980s the chip hardware design business faced a challenge: chips were getting more and more complex, more expensive to build, and taking longer to fabricate - productivity was a significant concern. By the end of the 1980s most chips were developed using simulation technology, and you would be hard pressed to find a chip sent for fabrication without significant testing on hardware design simulators such as Tegas, HILO, and Verilog-XL.
This move from a "develop prototype" to a "run simulation model" based methodology dramatically improved hardware development productivity and enabled the hardware teams to harness complexity and manage exploding project schedules.
Those companies developing embedded software that have moved from a "develop hardware prototype" to a "use simulation model" (i.e. Software Virtual Platform) methodology are dramatically improving their development schedules and starting to manage exploding software complexity.
By the late 1980s it had become unthinkable to fabricate a chip without verification using a simulation model (virtual prototype) of the hardware. In fact, with ASICs the chip fabricators mandated "sign-off" simulation, forcing designers to complete specific simulation runs before the chips were allowed to be fabricated.
Once a simulation methodology was established for chip design, new technologies in the verification space gave design teams and managers the capability to measure quality and project progress, enabling a much more controlled, predictable, and higher-quality development process for hardware design. You can expect similar benefits as your software teams adopt simulation and Software Virtual Platforms.
In the future there will be many more tools in the software development team's arsenal to assist with the challenge of developing embedded software for complex multi-processor/multi-core platforms. Software Virtual Platforms are the foundation that will enable the creation of these new tools and methodologies.
It will soon be unthinkable to develop embedded software without using a Software Virtual Platform. For Multi-Processor / Multi-Core systems it is essential today.