The --track-assumes option makes smtbmc keep track of which assumptions
were used by the solver when reaching an unsat case and output that set
of assumptions. This is particularly useful for debugging PREUNSAT
failures.
The --minimize-assumes option can be used in addition to --track-assumes
and causes smtbmc to spend additional solving effort to produce a
minimal set of assumptions that is sufficient to cause the unsat
result.
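For example, assuming a design already exported to a hypothetical
design.smt2 file, the two options can be combined in a single run:

    yosys-smtbmc --track-assumes --minimize-assumes design.smt2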
Note that this functionality is only used by diagnostics emitted by
the C++ testbench; diagnostics emitted by the RTL in `eval()` do not
need to be recorded since they will be emitted again during replay.
Previously `unroll_stmt` would recurse over the smtlib expressions as
well as recursively follow not-yet-emitted definitions that the current
expression depends on. While the depth of smtlib expressions generated
by Yosys seems to be reasonably bounded, the dependency chain of
not-yet-emitted definitions can grow linearly with the size of the
design and with the BMC depth.
This makes `unroll_stmt` use a `list` as an explicit stack, with Python
generators and a `recursion_helper` function keeping the overall code
structure of the previous recursive implementation.
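The core of that transformation is sketched below in C++ with
hypothetical node types (the actual smtbmc code is Python and uses
generators to preserve the recursive structure):

    #include <cstddef>
    #include <stack>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for an smtlib expression whose children may
    // include not-yet-emitted definitions.
    struct Node {
        std::vector<Node *> children;
        bool emitted = false;
    };

    // Instead of recursing (and risking deep call stacks on long
    // dependency chains), keep an explicit stack of (node, next child).
    void unroll(Node *root) {
        std::stack<std::pair<Node *, std::size_t>> todo;
        todo.push({root, 0});
        while (!todo.empty()) {
            auto &[node, next_child] = todo.top();
            if (node->emitted) {
                todo.pop();
            } else if (next_child < node->children.size()) {
                Node *child = node->children[next_child++];
                if (!child->emitted)
                    todo.push({child, 0});
            } else {
                node->emitted = true; // emit the definition here
                todo.pop();
            }
        }
    }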
Rename formal cells in addition to witness signals. This is required to
reliably track individual property states for the non-smtbmc flows.
Also remove a misplaced `break` which resulted in only partial witness
renaming.
This commit adds a `debug_scopes` container, which can collect metadata
about scopes in a design. Currently the only scope is that of a module.
A module scope can be represented either by a module and cell pair, or
a `$scopeinfo` cell in a flattened netlist. The metadata produced by
the C++ API is identical in both cases, so flattening a netlist remains
transparent to consumers of CXXRTL debug information.
The existing `debug_info` method is deprecated. This isn't strictly
necessary, but the user experience is better if the path is provided
as e.g. `"top "` (as some VCD viewers make it awkward to select the
topmost anonymous scope), and the upgrade flow encourages that, which
should reduce frustration later.
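A minimal sketch of the upgraded call site, assuming the new overload
takes pointers to both containers, that the CXXRTL runtime header is
already included, and that the generated design class is the
hypothetical `cxxrtl_design::p_top`:

    cxxrtl_design::p_top top;
    cxxrtl::debug_items items;
    cxxrtl::debug_scopes scopes;
    // "top " (with a trailing space) names the topmost scope so that
    // VCD viewers are not left with an awkward anonymous root.
    top.debug_info(&items, &scopes, "top ");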
While the new `debug_info` method could still be broken in the future
as the C++ API permits, this seems unlikely since the debug information
can now capture all common netlist aspects and includes several
extension points (via the `debug_item` and `debug_scope` types).
Also, naming of scope paths was normalized to `path` or `top_path`,
as applicable.
Before this commit, `at()` and `operator[]` did the same thing, making
one of them redundant. There was also a somewhat awkward `parts_at()`,
which is more generic than `at()`.
After this commit, `parts_at()` becomes `at()`, and `at()` becomes
`operator[]`. Any quick-and-dirty accesses should use `items["name"]`,
which expects a single-part item. Generic code should instead use
`items.at("name")`, which will return multiple parts. Both will check
for the existence of the name.
This is unlikely to break downstream code since it's likely been using
the shorter `operator[]`. (In any case this API is not stable.)
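For illustration, assuming an `items` container already populated via
`debug_info()`, and a hypothetical single-part item named "top sig":

    // Quick-and-dirty access: expects exactly one part, and checks
    // that the name exists.
    const cxxrtl::debug_item &single = items["top sig"];
    (void)single;

    // Generic access: yields every part of the item, and also checks
    // that the name exists.
    for (const cxxrtl::debug_item &part : items.at("top sig"))
        (void)part; // e.g. look at part.width, part.curr, ...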
Because all objects in C++ must have a non-zero size, a `value<0>` has
a size of 4 bytes despite consisting only of a zero-length
`uint32_t chunks[0]` array. The debug item assertions were not written
with this in mind and prevented any debug items for such values from
compiling.
The C API does not define exactly what happens for a zero-width debug
item, but it seems reasonable to say that it should refer to some
unique pointer that cannot actually be read or written. This allows
some techniques or optimizations that use `curr` pointers as keys and
assume they correspond 1-to-1 to simulation objects.
While SBY's aiger flow already removes logic that does not drive
assertions, there are some uses of write_aiger outside of SBY that
could end up with $scopeinfo cells, so we explicitly ignore them.
The write_btor backend works differently: due to the way it recursively
visits cells, it never reaches isolated cells like $scopeinfo.
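The ignore amounts to the kind of one-line skip sketched below
(illustration only, using common Yosys idioms; the actual change may
differ):

    for (auto cell : module->cells()) {
        // $scopeinfo carries only hierarchy metadata and drives
        // nothing, so the AIGER writer can skip it outright.
        if (cell->type == ID($scopeinfo))
            continue;
        // ... handle the remaining cell types as before ...
    }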
These were previously not being converted correctly, leading to Yosys
internal cells being written to my netlist.
Signed-off-by: Ethan Mahintorabi <ethanmoon@google.com>
This is mostly useful for collecting coverage for the future `$check`
cell, where, depending on the flavor, formatting a message may not be
wanted even for a failed assertion.
The behavior of these format specifiers is highly specific to Verilog
(`$time` and `$realtime` are only defined relative to `$timescale`)
and may not fit other languages well, if at all. If other languages
choose to use these specifiers, it is now clear what they are opting
into.
This commit also simplifies the CXXRTL code generation for these format
specifiers.
At the moment, the only thing the new context object allows is
redirecting `$print` cell output in a context-dependent manner. In the
future, it will allow
customizing handling of `$check` cells (where the default is to abort),
of out-of-range memory accesses, and other runtime conditions with
effects.
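As a sketch of that redirection (assuming the hook is named `on_print`
and receives a lazily formatted message; exact names and signatures are
assumptions here):

    struct : public performer {
        std::string log;
        void on_print(const lazy_fmt &formatter,
                      const metadata_map &attributes) override {
            (void)attributes;
            log += formatter(); // collect $print output instead of stdout
        }
    } collector;
    // p_top.step(&collector);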
This context object also allows a suitably written testbench to add
Verilog-compliant `$time`/`$realtime` handling, although it involves
the ceremony of defining a `performer` subclass. Most people will
want something like this to customize `$time`:
    int64_t time = 0;
    struct : public performer {
        int64_t *time_ptr = nullptr;
        int64_t time() const override { return *time_ptr; }
    } performer;
    performer.time_ptr = &time;
    p_top.step(&performer);
This approach to tracking simulation time was a mistake that I did not
catch in review. It has several issues:
1. There is absolutely no requirement to call `step()`, as it is
a convenience function. In particular, `steps` will not be
incremented in submodules if `-noflatten` is used.
2. The semantics of `steps` does not match that of the Verilog `$time`
construct.
3. There is no way to make the semantics of `%t` match that of Verilog.
4. The `module` interface is intentionally very barebones. It is little
more than a container for three method pointers, `reset`, `eval`, and
`commit` (see the sketch after this list). Adding ancillary data to it
goes against this.
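For reference, the interface in question is roughly the following
(a sketch based on the description above; the actual class in cxxrtl.h
has more detail):

    struct module {
        virtual ~module() = default;
        virtual void reset() = 0;
        virtual bool eval() = 0;
        virtual bool commit() = 0;
    };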
If similar functionality is introduced again, it should probably be a
per-design global, held in some object that is unique to an entire
hierarchy of modules, and ideally exposed via the C API. For now, the
functionality is being removed (in this commit) and the capability is
being reintroduced (in the next commit) through a context object that
can be specified for `eval()`.
This commit achieves three roughly equally important goals:
1. To bring the rendering code in kernel/fmt.cc and in cxxrtl.h as close
together as possible, with an ideal of only having the bigint library
as the difference between the render functions.
2. To make the treatment of `$time` and `$realtime` in CXXRTL closer to
the Verilog semantics, at least in the formatting code.
3. To change the code generator so that all of the `$print`-to-`string`
conversion code is contained inside a closure.
There are two reasons to aim for goal (3):
a. Because output redirection through the definition of a global
ostream object is neither convenient nor useful for environments where
the output is consumed by other code rather than being printed on
a terminal.
b. Because it may be desirable, in some cases, to ignore the `$print`
cells present in the netlist based on a runtime decision.
This is doubly true for an upcoming `$check` cell implementing
assertions, since failing a `$check` would by default cause a crash.
This avoids having to devirtualize them later to get performance back,
and simplifies the code a bit.
The change is prompted by the desire to add a similar observer object
to `eval()`, and by repeated consideration of whether the dispatch on
these should be virtual in the first place.
The name `on_commit` was terrible since it would not be called on each
`commit()`, as one might conclude from the name, but only whenever that
method actually updates a value.
This commit adds a reader/writer implementation for a file format
optimized for fast, single-pass storage and retrieval of design state
changes, as well as a recorder/replayer that integrate with the eval
and commit simulation steps to create replay logs and reproduce them
later.
This feature makes it possible to run a simulation once, recording
the stimulus as well as changes to the registers, and navigate to
a past time point in the simulation later without rerunning it.
Both the changes in inputs (stimulus) and the changes in state are
saved, so navigating to a time point does not require calling `eval()`
or `commit()`, only a series of memory copy operations.
On a representative SoC netlist, saving the replay log while simulating
takes 150% of the time it would take to simulate the same design
without logging, which is a much lower overhead than writing an
equivalent full-view VCD waveform dump (including memories). The replay
log is also several times smaller than the VCD dump, and more space
savings are available as low-hanging fruit.
Replaying the log has not been optimized and currently takes about the
same time as running the simulation in the first place. However, it is
still useful since it provides fast navigation to an arbitrary time
point, something that rerunning the simulation does not allow.
The current file format should be considered preliminary. It is not
very space-efficient, and my testing shows that a lot of time is spent
in the write() syscall in the kernel. Most likely, compression and/or
writing in another thread could improve performance by 10-20%. This
may be done at a later time.