The 'e' language (not to be confused with 'E'), designed by Verisity, is a programming language targeted at hardware verification, implemented by their Specman Elite tool (which is, for the record, really quite expensive). It's ideally suited for creating testbenches for HDL models, and testing those models, thanks to several interesting features of the language:

  1. Builtin interface to logic simulator packages such as NCVerilog, ModelSim, etc.
  2. Random generation of arbitrary data types, with linear constraint solving to generate data that meets some user-specified validity requirement.
  3. Event handling and processing, including a sophisticated temporal expression evaluation system to specify events in terms of sequences of other events.
  4. Functional coverage collection and analysis, driven by the event system.

The language provides a bare minimum of object oriented features, with no encapsulation and a form of inheritance which actually comes via its aspect oriented feature set.

Syntax

The syntax of the core language is based heavily on C, with some simplification. Unusually, the language contains facilities for extending its own syntax: its "preprocessor" is somewhere between cpp and yacc. This is used in the standard system to provide some syntactic sugar, and also some rather neat constructs, including a few ideas borrowed from functional programming languages.

Lists are supported as a builtin data type (actually implemented by vectored dynamic arrays), as are keyed lists (almost, but not quite, a hash), and pleasingly they use a syntax for list literals which is identical to the syntax of structure members and statements within blocks in C: { item1; item2; }. Also, a semicolon is required after compound statement blocks, which keeps everything nice and regular.
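
For instance, a minimal sketch (the struct and field names here are purely illustrative):

  struct list_demo {
    values: list of int;                        -- a plain list field
    entries: list (key: name) of name_value;    -- keyed list, indexed by each element's 'name' field

    show() is {
      var squares: list of int = {1; 4; 9};     -- list literal uses the C-block syntax
      out(squares.size());                      -- prints 3
    };
  };

  struct name_value {
    name: string;
    value: int;
  };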

Objects and Aspects

The object (or 'struct') model in e is superficially similar to C++ or Java (although data declarations are similar to Pascal or Ada). A declaration of the struct contains definitions of the struct's data members and methods:

  struct myStruct {
    myvar: int;
    getMyVar(): int is {
      result = myvar;    -- 'result' is the method's implicit return value
    };
  };

However, things soon get more complicated. The aspect oriented feature set includes the ability to extend previously declared structure types with additional data members (aspect variables, as AspectJ would call them) and methods, and also to extend the methods themselves. It's a very useful paradigm in a verification context, because verification is composed almost entirely of crosscutting concerns.

Conditional extension, however, is where it starts to get hairy. And this is also how inheritance is implemented in e. Let's jump straight in at the deep end, and extend our 'myStruct' type from above so that we have a subtype which also has a 'myOtherVar' integer.

  extend myStruct {
    hasMyOtherVar: bool;
    when TRUE'hasMyOtherVar myStruct {
      myOtherVar: int;
      getMyOtherVar(): int is {
        return myOtherVar;
      };
    };
  };

If you followed that, your mind might be boggling by now. What we did was:

  1. We decided to extend the definition of a previously defined struct ('myStruct'). This code could be (and in fact almost certainly would be) in a completely different source file to the original definition.
  2. We add another data member, 'hasMyOtherVar'. This is unconditional, so every instance of myStruct in the system will also have this data member.
  3. We introduce a conditional part of the definition of myStruct. All of the following definitions only take effect when hasMyOtherVar is TRUE. So long as there are no other members of the struct (at least, none visible to this aspect) which could have the value TRUE (i.e. no other bools), we could have simplified the condition to simply 'when TRUE'. Which is kinda neat, when you think about it.
  4. Inside the conditional section, we now have a 'myOtherVar'. This is only valid if hasMyOtherVar is TRUE, and is otherwise completely invisible.
  5. We define a new method, which only exists on the struct when hasMyOtherVar is TRUE and otherwise has no meaning.

In effect, we've subclassed myStruct. And in the process we've learned that trying to think in terms of classical object oriented design in e isn't going to get us very far.
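
The subtype even has a name of its own: 'TRUE'hasMyOtherVar myStruct' can itself be used as a type. A minimal sketch (the containing struct here is hypothetical, purely for illustration):

  struct myContainer {
    -- this field is always the TRUE'hasMyOtherVar subtype, so myOtherVar and
    -- getMyOtherVar() are visible on it without any casting
    special: TRUE'hasMyOtherVar myStruct;
  };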

Enumerated types can be extended by adding new values, which comes in particularly useful for exactly the situation we saw above. Methods can be extended, too:

  extend myStruct {
    getMyVar(): int is first {
      out("Retrieving value of myVar!");    -- runs before the original method body
    };
  };
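
The enumerated-type extension mentioned above looks much the same. A minimal sketch, assuming a hypothetical 'shape_kind' enum that isn't part of the running example:

  type shape_kind: [CIRCLE, SQUARE];

  extend shape_kind: [TRIANGLE];    -- shape_kind now has three legal values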

And of course, all of these things can happen inside when conditional extensions. It's a risky way of doing things. Because structures and methods can be defined cumulatively, in almost arbitrary order, within and between source files, tracing execution and semantics becomes difficult. It can create serious maintenance headaches where more than one person is working with the code, since the boundary between 'interface' and 'implementation' no longer exists as a tangible part of the language, and must be artificially (and consciously) reconstructed.

Generation and Constraint Solving

One of the most useful concepts in e is generation. The main focus of the language's designers is on functional verification, a large proportion of which is concerned with the generation of tests and test data.

Generating tests is hard for complex systems. To test a microprocessor, for example, we'd essentially need to generate code. That code would have to behave in a well-defined manner (we don't want it reading memory that isn't physically implemented, and there will likely be large sections of the instruction set architecture with unpredictable results), and ideally a test program should reach some defined "end" point at which we can say the test has passed or failed. Simply filling the program memory with random data and hitting 'Go' is unlikely to be an effective use of simulation time.

Instead, we'd like to define a set of constraints (along the lines of "make memory references point to some memory that really exists"), and e's keep construct allows us to do just that. With keep, we can specify an expression which the constraint solver will hold to be true while randomly generating the data items for a struct.

If we were to request the generation of a random myStruct, its 'myvar' values would be evenly spread over the integer range. Approximately half the time it would have a 'myOtherVar', which would also be spread over the integer range. But let's say, for the sake of example, that we want to keep myvar between zero and ten, and that if myvar happens to be 5, we really need the myStruct to have a myOtherVar. It's a somewhat arbitrary set of constraints, but it's precisely 22:42, and if I can't be arbitrary at some arbitrary time of night, then when can I?

  extend myStruct {
    keep myvar >= 0 && myvar <= 10;
    keep (myvar == 5) => hasMyOtherVar;
  };

Lo and behold, every myStruct we generate from that moment on will satisfy those constraints. Unless, of course, we've accidentally specified a set of constraints that doesn't make sense, or which is contradicted by a later set of constraints, in which case the constraint solver detects the contradiction and gives us a ticking off for it.
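
Constraints can also be layered on at the point of generation. Here's a minimal sketch using the gen action, hung off the predefined sys.run() method purely as a convenient place to put it:

  extend sys {
    run() is also {
      var s: myStruct;
      gen s keeping {
        it.myvar == 7;    -- 'it' refers to the item being generated
      };
      out("generated myvar = ", s.myvar);
    };
  };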

Simulation interface

One of the things that makes e such a Swiss Army knife in the verification engineer's toolkit is its ability to run as a PLI or VPI library inside a logic simulator, completely transparently to pre-existing e code. Having defined a struct with constraints to generate valid test data, we can then take that struct and apply its values to the actual design under simulation, simply by enclosing the hierarchical signal name in single quotes inside Specman and assigning to it as if it were any other e expression.

The same thing works in reverse, of course: reading a signal name inside single quotes returns the value of that signal at the current simulation time.
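
A minimal sketch of both directions (the hierarchical paths are hypothetical and would need to match the actual design):

  extend myStruct {
    drive_and_sample() is {
      'top.dut.data_in' = myvar;                    -- drive a DUT input from the generated field
      out("data_out is ", 'top.dut.data_out');      -- sample a DUT output at the current time
    };
  };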

Events, Threads and Time-Consuming Methods

To match up with the simulation semantics and simulated time, as well as the interface imposed by PLI and VPI, e has a notion of time. Much like in verilog, imperative code is considered to take no simulation time to execute. Since e is fundamentally imperative, with a single top-level thread of control, and doesn't inherently respond passively to events on signals (they have to be sampled explicitly), it introduces a cooperative multitasking thread model to allow signals to be monitored or polled in an imperative fashion.

To do this, an 'event' data type and a new kind of method, the "time-consuming method", are introduced. The execution of a time-consuming method (TCM) spans more than one simulation event. A conventional method cannot call a TCM, since that would imply the normal method might occupy simulation time; it can, however, launch one as a new thread using the start action.

Time passes in a TCM when a wait @event statement is executed. The syntax and semantics are suspiciously similar to those of verilog. A lot like a yield call in a cooperative multitasking environment, the wait statement causes the interpreter to check any other TCMs and execute them if possible, and if not, hand control back to the logic simulator to allow simulation to proceed until a TCM's wait condition is satisfied.
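
A minimal sketch (the clock path is hypothetical, and the event and method names are just for illustration; a TCM like this would typically be kicked off from run() with the start action):

  extend myStruct {
    event clk is rise('top.clk') @sim;    -- fires on each rising edge of the DUT clock

    watch_bus() @clk is {
      wait [10] * @clk;                   -- consume ten clock cycles of simulation time
      out("ten clocks have passed, data_in is ", 'top.dut.data_in');
    };
  };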

The similarity in simulation semantics to verilog means that it's viable to define a synthesizable subset of e, which can be converted into hardware. Significant work has been done in this direction, in fact: a single testbench environment for simulation, emulation and physical testing is highly valuable...

Temporal Expressions

The event system in e is actually quite rich, and a notation known as temporal expressions exists for specifying events in terms of other events, with temporal sequencing. Temporal expressions share a lot of ideas, and indeed syntax, with regular expressions, and are particularly useful for functional coverage analysis.
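
A minimal sketch, reusing the clk event from the TCM example above and adding two hypothetical handshake signals:

  extend myStruct {
    event req is rise('top.dut.req') @sim;
    event ack is rise('top.dut.ack') @sim;

    -- fires when an ack follows a req within one to five clock cycles
    event handshake is {@req; [1..5]; @ack} @clk;
  };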

Functional Coverage

Handily enough, e provides facilities for specifying and measuring functional coverage, both of conditions in the design under test and of the test data applied to it.

The coverage system is driven by the event system: on a specified event, the system records any values which are significant to coverage, and adds them to its database, writing out the coverage data to disk at the end of its run.

Items to be covered on a given event can be listed individually, or the Cartesian product of several parameters can be covered. Want to be sure you've seen every combination of A with every combination of B? Take the cross product! Some of those combinations are actually illegal and will never occur in the system? You can tell e this, and if they do occur, it will flag an error.
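
A minimal sketch (the item_done event is hypothetical; in practice it would be emitted when a generated item has been applied to the DUT):

  extend myStruct {
    event item_done;

    cover item_done is {
      item myvar;                      -- buckets of the values myvar actually took
      item hasMyOtherVar;
      cross myvar, hasMyOtherVar;      -- every combination of the two
    };
  };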

Coupled to the coverage collection, Specman comes with a tool for viewing and interactively exploring the coverage data, showing which functional coverage points have not yet been hit and indicating where to direct further testing activity.

Summing up...

Underneath all the "interesting" features, e is a fairly neat little language with a consistent syntax, and it's a lot more fun to use than it has any right to be. It's the additional features, though, that make it pragmatic and insanely useful. If you happen to be doing functional verification, that is.