MINIMOS-NT is continuously under development, since there are always ideas for additional features. In order to preserve what has already been achieved, the existing functionality has to be tested on a regular basis.
The MINIMOS-NT test (mmnttest) is a feature-based test: a given set of simulations is run and the produced output files are compared with references. If differences are detected, their cause has to be investigated. The test is based on the following ideas:
Unfortunately, comparing output files is a cumbersome task. The main difficulty is the limited precision of the floating-point number representation. The output files heavily depend on
Therefore, the mmnttest references have to be created multiple times to cover the range of development environments used at the institute.
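This also illustrates why a naive byte-wise comparison of output files is too strict. One way to relax such a comparison is a tolerance-based check, sketched below for whitespace-separated, mostly numerical output files; the function name, the file format, and the tolerance values are chosen for illustration only and are not taken from mmnttest itself:

import math

def compare_files(file_a, file_b, rel_tol=1e-6, abs_tol=1e-12):
    # compare two whitespace-separated output files within a numerical tolerance
    with open(file_a) as fa, open(file_b) as fb:
        for line_a, line_b in zip(fa, fb):
            for tok_a, tok_b in zip(line_a.split(), line_b.split()):
                try:
                    va, vb = float(tok_a), float(tok_b)
                except ValueError:
                    if tok_a != tok_b:      # non-numerical tokens must match exactly
                        return False
                    continue
                if not math.isclose(va, vb, rel_tol=rel_tol, abs_tol=abs_tol):
                    return False
    return True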
Note that all vprojects have to be compiled in the same mode in order to ensure consistent numerical representations. In addition, compiling and linking should be restricted to a single machine, even if full paths are specified in the configuration file.
The problem of the numerical representation has been partly resolved by changing the platform to AIX, for one particular reason: the main development platform of the institute is Linux, where a variety of compilers and compilation modes is in use, because each developer has personal preferences, which are widely accepted and supported in the academic context. Once the development is ready to be ported to AIX, however, all members of the development team use the same compiler and (almost) the same compiler configuration. Since AIX is not the major development platform, it is also easier to recommend respective guidelines.
Since the test can be run on more than one platform, inter-platform comparisons can be made as well. Such analyses can detect implementation errors which only become apparent when a binary compiled and linked with another compiler is run. As the SEILIB test system conveniently allows repeated access to the result files, several additional checks can be performed while post-processing a test run. These checks may also include comparisons with references generated for other platforms.
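The following sketch illustrates how such a post-processing check could look. It reuses the compare_files sketch from above; the .crv extension and the ref_crv directory layout (cf. the reference directories described at the end of this section) are merely assumptions about the file organization, not the actual SEILIB interface:

import glob
import os.path

def cross_check(result_dir, refdirs):
    # compare every curve file of a test run against references from other platforms
    for result in glob.glob(os.path.join(result_dir, "*.crv")):
        name = os.path.basename(result)
        for refdir in refdirs:
            reference = os.path.join(refdir, "ref_crv", name)
            if os.path.exists(reference) and not compare_files(result, reference):
                print("mismatch: %s versus %s" % (name, refdir))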
As stated above, mmnttest is a feature-based test:
A test consists of general settings, such as the name or ID, as well as of different models.
All model combinations are performed for all devices and all iteration schemes. The small optical test is given as an example:
import seitestlib

example = seitestlib.Example()
example.newConstant("NAME", "Optical")
example.newConstant("NAMEID", "OPT")
example.newConstant("DEVICE", "Diode-PIN")
example.newConstant("SCHEME", "DD")
example.newConstant("LVL", 1)
example.numCombinations = 2
example.newVariable("ID", [ "01", "02" ])
example.newVariable("J0", [ "1e12", "1e15" ])
example.newVariable("alpha", [ "0.5", "0.7" ])
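Assuming that SEILIB expands the variable lists index-wise, so that the i-th combination takes the i-th entry of each list (this expansion is spelled out here only for illustration), the two combinations of this example correspond to the following parameter sets:

combination 1: ID = "01", J0 = "1e12", alpha = "0.5"
combination 2: ID = "02", J0 = "1e15", alpha = "0.7"

together with the constants NAME, NAMEID, DEVICE, SCHEME, and LVL given above.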
In contrast to former mmnttest versions, a level system has been introduced. The level of a test is defined by the argument {LVL}. It was decided to provide nine levels, which are defined as follows:
The ninth level is reserved for deactivated models, for example unsuccessful simulations. The other two blocks are divided into four categories each, based on the execution time, which refers to a specific computer, compiler, and mode. At the moment over 800 tests are available, so a complete run takes several days on a single computer. This would obviously be very inconvenient for fast tests during development, and not all tests have to be run every time anyway. Although some models may still be important in the context of full tests, they can be skipped in order to speed up pre-check-in tests. Hence, before changes may be committed to the CVS repository, a full level 1-4 test should be performed. Levels 5 to 8 are additionally covered by automated test runs, which are started independently of feature check-ins.
The -level option of the test script accepts a range or individual level numbers, separated by commas.
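Purely as an illustration of this syntax, a hypothetical helper could parse such a specification into a set of level numbers; neither the function nor the concrete string 1-4,6 is taken from the test script:

def parse_levels(spec):
    # parse a specification such as "1-4,6" into the set {1, 2, 3, 4, 6}
    levels = set()
    for part in spec.split(","):
        if "-" in part:
            low, high = part.split("-")
            levels.update(range(int(low), int(high) + 1))
        else:
            levels.add(int(part))
    return levels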
The core of the test is the main library seitestlib.py, which provides two main classes: Example and CompleteTest. The former is derived from seiclass, the latter from Example (see Figure C.2), so the new test library is a full SEILIB application. This library is imported and used by each test script file. To run all tests automatically, the driver script mmnttest.py can be used.
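In terms of code, this structure can be pictured by the following skeleton; the seiclass stand-in and the comments merely paraphrase the text, only the inheritance relations correspond to Figure C.2:

class seiclass:               # stand-in for the SEILIB base class (assumption)
    pass

class Example(seiclass):      # general settings plus per-combination variables of one test
    pass

class CompleteTest(Example):  # presumably drives the runs over devices, schemes, and combinations
    pass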
Each test consists of several files:
The following directories, which may further depend on the host platform, are created by the library if they do not exist:
Obviously, there are connections between the general settings and filenames:
In the former versions, the input device names were tied to the test name, which is actually not necessary. Furthermore, this connection prevents a quite useful sorting by device classes such as diodes. In the new system, the device name must start with a class, followed by an optional description.
Since the test system combines input information into filenames, several conventions for naming test files must be adhered to. Device names start with a general device description whose first letter is capitalized, for example Diode. To further specify the device, for example the material Si-Si, an appropriate string is appended, separated by a hyphen. This makes it easy to see all already existing devices of a specific category and to pick one for a new test if appropriate. The extension has to be pif.
Template names start with a capital letter and describe the test. The extension has to be ipd.tpl. Note that the name of the test is connected with the general setting NAME in the test script.
Test script names start with a specific prefix, which is mmnt- for all tests of MINIMOS-NT, followed by a specific name for the test. It is recommended to use the same name as for the template. The extension is .py.
As already shown above, the test script test-optical.py contains:
example.newConstant("NAME", "Optical") # connected with the template example.newConstant("DEVICE", "Diode-PIN") # connected with the device
A system was defined to create unique file names for the actual input and output files. To identify a test, two strings are used:
Why is {ID} a separate argument and not automatically generated from the combination numbers? First, related tests can thus express a relationship that goes beyond the mere combination number. Second, skipped or added tests do not destroy already generated references, which would happen if the combination number itself were part of the filenames. Based on the idea of the former versions, the device name and the iteration scheme are added to the filename.
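Following this scheme, a unique base name can be thought of as a simple concatenation of the identification strings; the ordering and the hyphen separator below are assumptions chosen for illustration, only the ingredients ({NAMEID} and {ID} from the example script, the device name, and the iteration scheme) are taken from the description above:

def make_basename(nameid, ident, device, scheme):
    # illustrative only: the real mmnttest naming scheme may differ in ordering and separators
    return "-".join([nameid, ident, device, scheme])

# e.g. make_basename("OPT", "01", "Diode-PIN", "DD") yields "OPT-01-Diode-PIN-DD"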
References are created by specifying the -genref option together with a new reference directory: -refdir x. The directory x must already exist and should have a descriptive name; it is not created automatically. However, the shell script automatically creates the following four subdirectories if they do not exist: x/ref_crv, x/ref_pif, x/ref_pbf, and x/ref_log.
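Creating these subdirectories amounts to only a few lines; a Python sketch of what the shell script does could look like this (the function name is chosen for illustration):

import os

def prepare_refdir(refdir):
    # create the four reference subdirectories inside an existing reference directory
    if not os.path.isdir(refdir):
        raise SystemExit("reference directory %s does not exist" % refdir)
    for sub in ("ref_crv", "ref_pif", "ref_pbf", "ref_log"):
        path = os.path.join(refdir, sub)
        if not os.path.isdir(path):
            os.mkdir(path)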
Since the output log of the simulations contains information such as the simulation time and date, it is not automatically compared. However, it might still be interesting for manual comparisons.