One possibility of dealing with the increasing intricacies is to attempt to uncover the minimal and fundamental structures at the core of the problem under investigation. This procedure, which proceeds by means of suitable abstractions, has already been applied with great success in the theoretical sciences. In a sense it continues an age-old quest, already present in the days of the alchemists, for the true name of something. The process of abstraction makes heavy use of mathematics and has found applications for, and thereby intensified the development of, mathematical constructs which had previously been purely theoretical, such as topology and differential geometry. The desired benefit is a simplified presentation of the problem at hand, at the cost of increased presumed knowledge, thereby dividing and encapsulating complexities.
Research and development have been greatly supported and accelerated by the resources and opportunities made available by digital computers and the numerical methods they support. The desire for alternatives to analytical solutions of scientific problems is at least as old as the calculus used to describe them, since it has proved difficult or even impossible to provide analytical solutions as the number of degrees of freedom increases. Applying the facilities offered by digital computers, especially in the field of solid state physics, may serve not only to promote the field of application but also to advance the computing infrastructure itself, since that infrastructure is designed using the foundations of our understanding of physics. Advances are thus fed back directly into the tools employed to obtain them, making these tools more powerful and providing the means to tackle bigger or more complex problems.
The increase in complexity of both the problem scenarios requiring solution and the tools deployed for their solution is, however, not without problems. A failure to address this complexity properly may very well stall the whole process as it begins to diverge. While science and engineering always strive for descriptions of physical models that are as simple as possible, it is not reasonable to expect the continuing trend towards increasingly complex descriptions to be reversed drastically, as more and more previously neglected details are taken into consideration. It therefore falls to the tools to be adapted to compensate for, or at least deal with, the rise in complexity.
While these same ideas have also been established in conjunction with software development, they more often than not remain confined to a select few who concern themselves (solely) with the theory of software development. The ideas and associated methods have not spread beyond this small circle of software specialists into the various fields of scientific application. This may in part be attributable to the fact that, in scientific communities employing numerical schemes as tools, topics connected with software development are often viewed as unscientific and more as a craft to be dealt with as swiftly as possible. The result is a divergence between the methodology that has already been made available and the methodology that is actively deployed. This discrepancy is amplified into a genuine problem now that the underlying hardware, which had long evolved swiftly in resources and performance while remaining essentially unchanged in programmability and architecture, has started to evolve and further diversify as multi-core CPUs and special-purpose GPUs have become widespread.
A change in the underlying architecture naturally has repercussions on the style of programming that can be implemented efficiently. Of course, vendors and other developers make an effort to provide compilers and libraries that reduce the resulting inconvenience for users and developers alike. To be effective, however, these offerings require making use of the provided libraries and adapting the style of programming to such an extent that the compiler is supplied with sufficient information to perform adaptations and optimizations for the target. These requirements are hard to meet in a highly conservative environment which often relies on libraries and interfaces from the 1970s, such as the Basic Linear Algebra Subprograms (BLAS), and continues to develop using a procedural style of programming.
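The contrast can be illustrated with a minimal sketch (not taken from the work itself; the function names daxpy and axpy are chosen purely for illustration). A fixed-signature routine in the spirit of the BLAS interface operates on raw arrays of a single hard-wired scalar type, while a generic formulation hands the compiler the complete type information it needs to inline, specialize, and vectorize for the target:

    #include <algorithm>
    #include <vector>

    // Procedural, BLAS-era style: fixed scalar type, raw pointers.
    // The compiler sees little more than an opaque loop over double*.
    void daxpy(int n, double a, const double* x, double* y)
    {
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    // Generic style: container and scalar types are parameters, so the
    // complete type information is available at compile time, enabling
    // inlining, specialization, and vectorization for the target.
    template <typename Vector, typename Scalar>
    void axpy(Scalar a, Vector const& x, Vector& y)
    {
        std::transform(x.begin(), x.end(), y.begin(), y.begin(),
                       [a](auto xi, auto yi) { return a * xi + yi; });
    }

    int main()
    {
        std::vector<double> x{1.0, 2.0, 3.0}, y{4.0, 5.0, 6.0};
        daxpy(3, 2.0, x.data(), y.data());  // y = {6, 9, 12}
        axpy(2.0, x, y);                    // y = {8, 13, 18}
    }

The generic version expresses the same operation without committing to a storage scheme or scalar type, which is precisely the kind of information a modern compiler exploits when adapting code to a target architecture.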
The adoption of newer strategies and methods of tackling a problem more often than not comes from an experienced need for change. The continued push towards increasingly intricate modelling of physical phenomena, and the resulting demand for concise, abstract, yet highly efficient descriptions, alone provides ample incentive. The equations of evolution of physical systems are therefore investigated, for the classical case as well as for quantum mechanical settings, with respect to their underlying structures. These structures should then serve as a guide for modelling with software components.
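To make the notion of a shared underlying structure concrete, recall a standard example (stated here only for illustration, assuming observables without explicit time dependence): a classical observable $f$ evolves under the Poisson bracket with the Hamiltonian $H$, while a quantum mechanical observable $\hat{A}$ in the Heisenberg picture evolves under the commutator with $\hat{H}$,

    \frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\},
    \qquad
    \frac{\mathrm{d}\hat{A}}{\mathrm{d}t} = \frac{1}{i\hbar}\bigl[\hat{A}, \hat{H}\bigr].

Both equations share the form "rate of change equals bracket with the Hamiltonian", and both brackets endow the observables with a Lie algebra structure; a software component capturing the bracket structure once can thus serve both settings.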