Virtual machines emerged during the 1990s as the platform of choice for developing frameworks and applications, offering large base class libraries, dynamic loading, and reflection. The design of these machines was influenced by the then-dominant assumption that processors would maintain a Von Neumann model, hiding any non-Von Neumann aspects inside their internal structure. Recently, Graphics Processing Units (GPUs), as well as the Cell BE architecture, have broken this assumption, exposing to programs forms of non-determinism that go beyond the traditional model. These architectures are difficult to target from the Just-In-Time (JIT) compiler of a Virtual Machine (VM) because their distinctive features (execution and memory models) are hidden by the abstraction layer of the intermediate language. This is a symptom of a widening gap between actual architectures and the abstract view offered by the VM, a gap that will eventually lead VM-based programs to under-use hardware resources.

In my research I introduce a set of types and meta-data to represent different parallel computations, freeing programmers from the task of specifying parallelism, communication, synchronization, and so on. At runtime, a meta-program can use reflection to evaluate this meta-data and generate the code required to exploit the special features of the underlying non-Von Neumann architecture.