How to verify complex designs based on RISC-V


As the development of the RISC-V processor matures and the use of the core in SoCs and microcontrollers increases, engineering teams face new verification challenges related not to the RISC-V core itself, but rather to the system based on or built around it. Naturally, that verification is just as complex and time-consuming as for, say, an Arm processor based project.

To date, industry verification efforts have focused on ISA compliance in order to standardize the RISC-V core. Now the question seems to be: How do we handle verification as the system grows?

Obviously, the challenge is evolving with multiple cores and the addition of out-of-the-box peripherals and custom hardware modules.

Here we can see two verification challenges. First, we need to make sure the core is correct and ISA compliant, and second, we need to test the system built around the core. Either way, transaction-level hardware emulation is the perfect choice, especially if the emulation is based on the Accellera SCE-MI standard, which allows reuse across different platforms and vendors. Combined with automatic design partitioning and extensive debugging capabilities, this provides a comprehensive verification platform.

As the processor core becomes more powerful and gains functionality, register transfer level (RTL) simulation is no longer enough; it cannot provide full test coverage within a reasonable timeframe. With emulation, the test speed is much higher (in the MHz range), which, combined with cycle accuracy, allows us to increase the length and complexity of the tests while still running them quickly.

When using emulation, the core itself can be automatically compared to the RISC-V ISS gold model to confirm its accuracy and that it meets ISA compliance requirements. Figure 1 shows a RISC-V processor under test.

Figure 1. A RISC-V processor under test. It is implemented in the emulator while RISC-V ISS is part of an advanced UVM test bench.

The test bench used during simulation can be reused for emulation, so it is worth making sure that the test bench is “ready for emulation” even at the simulation stage. This allows a smooth switch between the simulator and the emulator without developing a new test bench.

This strategy will also pay off when custom instructions (instructions intended to speed up a design’s algorithms) are added to RISC-V, because hardware emulation makes it possible to test these instructions and compare them against the algorithms they accelerate much faster than in a pure simulation environment.

After the processor or CPU subsystem has been verified, we can move on to checking the whole system. Fortunately, the same technique can be used for checking the other SoC hardware, custom logic, and peripherals. All of them can be implemented in the emulator and verified with the same Universal Verification Methodology (UVM) or SystemC test bench used in simulation. See Figure 2.

Figure 2. All SoC elements, including the RISC-V core, can be implemented in an emulator and verified with the same UVM or SystemC test bench used during simulation.

Such a methodology allows long test sequences (constrained-random UVM sequences, for example) to build complicated test cases, and it accelerates SoC architecture benchmarking simulations used to optimize the hardware structure and components.

We must remember, however, that SoC projects these days require not only hardware development but complex, multi-layered software as well. This means that software and hardware engineering teams are working on the same project, with complex verification requirements and great challenges at the software-to-hardware interface. Software teams typically begin development in isolation, using an ISS or virtual platforms, which is usually sufficient as long as there is no need to interact with the new hardware.

When the system grows with custom devices and modules, the software must support not only the RISC-V core and its close surroundings (which can also be modeled in software), but also the rest of the hardware modules, by providing operating system drivers, APIs, or high-level applications.

How do you make sure that these two worlds can work together and be in sync while developing and testing the whole project?

The solution is, again, transaction-level emulation. A hardware emulator lets us test all the RTL modules at higher speed with flexible debugging functions, but there is even more: the emulator’s host interface API (usually C/C++ based) allows us to connect the virtual platform used by the software team, creating an integrated verification environment for both the software and hardware sides of the project. See Figure 3.

Figure 3. A “co-emulation” environment created using an SCE-MI macro-based emulator API and TLM interface in a virtual platform.

Now we can run the whole system at MHz speeds, which shortens the boot time of an operating system, for example, from a few hours to a few minutes, and allows parallel debugging of the processor and the hardware subsystems.

The advantage of a hybrid co-emulation platform is that software engineers don’t have to migrate to a completely different environment when the RTL code in the design matures. Their primary development vehicle is still the same virtual platform but, thanks to co-emulation, it now represents the entire SoC, including custom hardware. This way, the software and hardware teams can work on the same revision of the project and verify the accuracy and performance of the design without waiting on each other.

What about FPGA prototyping, you might ask? Why not just use that? The answer is quite simple: prototyping requires the RTL source code for every element of the system to be ready and synthesized into the FPGA, which is time-consuming, so in the meantime the software team has to work on virtual models only.

Even when the entire RTL design is ready, providing prototyping hardware to all software developers can be quite expensive. Therefore, using the co-emulation approach allows us not only to check the entire system and uncover potential issues much earlier in the project development cycle, but also to optimize the cost of verification tools.

Additionally, with the more comprehensive hardware debugging tools available in emulation, any flaws or bugs in the RTL code can be easily diagnosed without returning to simulation (yet another benefit of early hardware-software co-verification). Once that is done, FPGA prototyping can certainly be extremely useful for final high-speed testing.

About the Author

– Zibi Zalewski is Managing Director of Aldec’s Hardware division.

