
README.md

Overview

Cocotb environment (CTN) is a dynamic simulation testing environment. Its purpose is to speed up simulation and collect coverage data. The environment is developed using cocotb, an open-source, coroutine-based co-simulation testbench environment for verifying VHDL and SystemVerilog RTL using Python. CTN has 2 main layers: tests and whitebox models. The tests layer contains multiple tests and sequences that communicate with caravel (the DUT) through drivers, shown in red in fig 1. The whitebox models layer contains multiple models that mimic the behavior of each main block inside caravel (see fig 1). Each model checks whether its block is working as expected, verifies that the block's registers contain the expected values at all times, and reports whether each feature provided by the block has been covered by a test.


fig1. caravel testbench environment (red lines are drivers)
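To make the whitebox-model idea concrete, here is a minimal sketch in plain Python (not the actual project code; the class and register names are hypothetical) of what such a model does: it shadows the expected register values and tracks feature coverage while checks run.

```python
# Illustrative sketch of a whitebox model: shadow the block's registers
# and count feature coverage. Names are invented for illustration only.
class BlockModel:
    def __init__(self):
        self.expected = {}   # shadow copy of the block's registers
        self.coverage = {}   # feature name -> hit count
        self.errors = 0

    def write(self, reg, value):
        # record what the register is expected to hold after this write
        self.expected[reg] = value
        self.coverage[f"write_{reg}"] = self.coverage.get(f"write_{reg}", 0) + 1

    def check(self, reg, dut_value):
        # compare the DUT's register value against the model's expectation
        if self.expected.get(reg) != dut_value:
            self.errors += 1
        self.coverage[f"read_{reg}"] = self.coverage.get(f"read_{reg}", 0) + 1
```

In the real environment the models live under wb_models and are driven from the cocotb coroutines; this sketch only shows the check-and-cover pattern.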

Prerequisites

Run a test

Use the script verify_cocotb.py:

  -h, --help            show this help message and exit
  -regression REGRESSION, -r REGRESSION
                        name of a regression; regressions can be found in
                        tests.json

  -test TEST [TEST ...], -t TEST [TEST ...]
                        name of the test(s); if no -sim is provided, RTL will
                        be run <takes a list as input>

  -sim SIM [SIM ...]    simulation type to be run (RTL, GL or GL_SDF);
                        provided only with -test <takes a list as input>

  -testlist TESTLIST, -tl TESTLIST
                        path of the testlist to be run

  -tag TAG              tag of the run; defaults to the regression name, and
                        if no regression is provided, to
                        run_<random float>_<timestamp>_

  -maxerr MAXERR        max number of errors for every test before the
                        simulation breaks (default = 3)

  -vcs, -v              use vcs as the compiler; if not used, iverilog is used

  -cov                  enable code coverage

  -corner CORNER [CORNER ...], -c CORNER [CORNER ...]
                        corner type; has to be provided in case of a GL_SDF run

  -keep_pass_unzip      normally the waves and logs of passed tests are
                        zipped; with this option they are kept unzipped

Refer to examples
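For instance, typical invocations might look like the following (the test, regression, and tag names here are hypothetical; only the flags documented above are assumed):

```shell
# Run one test at RTL (the default when -sim is omitted), compiled with iverilog:
#   python3 verify_cocotb.py -t gpio_test
#
# Run the same test at both RTL and gate level, with vcs and code coverage:
#   python3 verify_cocotb.py -t gpio_test -sim RTL GL -vcs -cov
#
# Run a whole regression from tests.json under a custom tag:
#   python3 verify_cocotb.py -r nightly -tag nightly_run
```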

Tests

Refer to the tests doc for the list of tests

cocotb directory tree

├── caravel.py -> contains driving and monitoring functions for the caravel interface
├── caravel_top.sv -> testbench top level
├── cpu.py -> contains driving and monitoring functions for the wishbone bus when the CPU is disabled
├── hex_files -> folder that contains the hex files
├── verify_cocotb.py -> script that runs tests and regressions
├── sim -> directory generated when a test is run
│   └── <tag> -> tag of the run
│       ├── <sim type>-<test name> -> test result directory containing all logs and waves related to the test
│       ├── command.log -> command used for this run
│       └── runs.log -> status of the run (failed and passed tests)
├── tests -> directory that contains all the tests
├── tests.json -> test list with all tests and regressions, plus a short description of every test
└── wb_models -> contains checkers and models for some caravel blocks
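Since tests.json groups tests into regressions and carries a short description of each, an entry could plausibly look like the purely hypothetical sketch below (field, test, and regression names are invented for illustration; consult the actual tests.json for the real schema):

```json
{
  "gpio_test": {
    "description": "drive all GPIOs and check readback",
    "regressions": ["nightly"]
  }
}
```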

How to debug

TO BE ADDED