 
 
News 2018 June
   
 

04.June.2018
OntoLix and OntoLinux Website update
The case of Jörg F. Wittenberg has reached a critical level in the last few weeks, so we have marked the link to the website of the Peer-to-Peer (P2P) Virtual Machine (VM) Askemos in the section Exotic Operating System of the webpage Links to Software with ** and the link to the introductory document titled "Askemos - A Distributed Settlement" with *.

Clarification
*** Work in progress - some more ordering and reordering; correct wording of last sections ***
When developing the integrating architecture of our Ontologic System (OS), we also incorporated High Performance and High Productivity Computing Systems (HP²CSs), including supercomputing systems or supercomputers, and designed our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) both like them and as the foundation of other HP²CSs, as can easily be seen from the

  • links in some sections of the webpage Links to Software of the website of OntoLinux, including
    • operating systems, including
      • kernel-less operating systems (section Exotic Operating System), such as the
        • Kernel-less Operating System (KLOS),
        • kernel-less operating systems based on the Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE) approach, which says in the
          • document titled "Building Fundamentally Extensible Application-Specific Operating Systems in SPACE" in the chapter 8 Future Directions "We are also applying SPACE to multiprocessors and scalable parallel computers, particularly for the purpose of using SPACE to implement very high-performance applications such as parallel machine communication primitives and high-speed networking." and
          • document titled "Implementing Operating Systems without Kernels" in the chapter 7.1 Parallel O/S Requirements "At [a university] researchers are working in many areas of parallel computer system research, including communication and hybrid shared memory systems, and parallel languages and filesystems, as well as parallel operating systems. We propose to build a development environment in which these researchers can effectively conduct experiments with operating system services for high-performance computing. The kernel-less approach appears to be the appropriate approach for several reasons.",

          and

        • Exokernel with library Operating System (libOS),
      • microkernel-based operating systems (section Exotic Operating System), such as the L4 microkernel,
      • monolithic operating systems, such as the operating systems based on the Linux kernel and the BSD Unix kernel (section Operating System), providing
        • Unix domain sockets used for Inter-Process Communication (IPC) and
        • Internet sockets,

        and

      • distributed operating systems and parallel operating systems, such as the
        • kernel-less operating systems based on SPACE and
        • reflective, object-oriented, active object- and actor-based (concurrent and lock-free or non-blocking), (resilient) fault-tolerant, reliable, and distributed operating system Apertos (Muse),

      and providing

      • remote communication over the Internet Protocol (IP) suite, commonly known as Transmission Control Protocol (TCP)/Internet Protocol (IP) or simply TCP/IP (see Apertos (Muse)),
      • our exception-less communication mechanism,
      • our asynchronous I/O without context switch (see the sketch after this list), and
      • service domains, which handle specific kernel services in user mode, such as providing a
        • networking stack, specifically a user space networking stack with zero-copy, zero-lock, and zero-context-switch, and
        • file system implementation, specifically a Zero-copy User-mode File System (ZUFS),

      and

    • distributed computing and parallel computing paradigms, including

    but also with the

  • integration of programming languages for the
    • shared memory (Symmetric MultiProcessing (SMP) and Non-Uniform Memory Access (NUMA)) and
    • distributed memory (e.g. cluster computing)

    programming models, including the

    • object-oriented programming language X10 for the Partitioned Global Address Space (PGAS) and Asynchronous Partitioned Global Address Space (APGAS) parallel programming models (see the Ontonics, OntoLab, Ontologics, OntoLix and OntoLinux Further steps and the OntoLinux Website update of the 19th of March 2012), and
    • C programming language extension Unified Parallel C (UPC) for the PGAS parallel programming model (see the Ontonics, OntoLab, Ontologics, OntoLix and OntoLinux Further steps and the OntoLinux Website update of the 27th of March 2012)

    (see also the OntoLinux Website update of the 6th of August 2012 and the OntoLix and OntoLinux Further steps of the 19th of July 2015), and

  • comparison of the related features of our OS with the parallel distributed file system Lustre (see the (OntoLix and) OntoLinux Further steps of the 29th of May 2008).
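
    For illustration only, the following minimal sketch in plain Java (not our exception-less mechanism itself, and with the file name demo.bin merely assumed) shows the conventional asynchronous, non-blocking I/O pattern referenced in the list above: the read request is submitted, the submitting thread keeps running, and a completion handler is invoked later.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.CountDownLatch;

    // Minimal sketch (assumed input file "demo.bin"): the caller submits a read
    // request and continues working; a completion handler runs when the I/O finishes,
    // so the calling thread never blocks on the read itself.
    public class AsyncReadSketch {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path path = Path.of("demo.bin");            // hypothetical input file
            ByteBuffer buffer = ByteBuffer.allocate(4096);
            CountDownLatch done = new CountDownLatch(1);

            AsynchronousFileChannel channel =
                    AsynchronousFileChannel.open(path, StandardOpenOption.READ);

            channel.read(buffer, 0L, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesRead, ByteBuffer attachment) {
                    System.out.println("Read completed asynchronously: " + bytesRead + " bytes");
                    done.countDown();
                }
                @Override
                public void failed(Throwable exc, ByteBuffer attachment) {
                    exc.printStackTrace();
                    done.countDown();
                }
            });

            System.out.println("Read submitted; the submitting thread keeps running.");
            done.await();                                // wait here only for the demo
            channel.close();
        }
    }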

    In relation with HP²CSs, the following two technologies are important to note:

  • Virtual Interface Architecture (VIA), which
    • is an abstract model of a user-level zero-copy network,
    • defines kernel bypassing and Remote Direct Memory Access (RDMA) in a network, and
    • provides the basis for the
      • InfiniBand (IB) networking standard,
      • internet Wide Area RDMA Protocol (iWARP), and
      • RDMA over Converged Ethernet (RoCE) protocol,

    and

  • Remote Direct Memory Access (RDMA) mechanism or technology, which
    • "supports zero-copy networking by enabling the network adapter to transfer data directly to or from application memory, eliminating the need to copy data between application memory and the data buffers in the operating system, which again requires no work to be done by the Central Processing Units (CPUs), caches, or context switches", and
    • is implemented with the IB standard, iWARP, RoCE protocol, and Soft IB, also known as Soft RoCE protocol, and also the Omni-Path Architecture (OPA).

    Indeed, as far as we can see, the VIA and IB came before our OS, but RDMA over Internet Protocol (IP) networks respectively over Internet sockets and Unix domain sockets used for IPC came with our OS and hence the protocols iWARP (TCP and SCTP; Internet Engineering Task Force (IETF) Request For Comments (RFCs) October 2007) and its successor RoCE (UDP and TCP with Soft RoCE as UDP tunnel), and also Soft IB (TCP) aka. Soft RoCE, as well as OPA came after our OS respectively with our OS. :o
    Needless to say, this has implications for software vendors, alliances, and foundations that implement RDMA over Ethernet in their open source code and closed source code.
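
    The standard Java class library does not expose RDMA, so the following sketch is only a loose, hedged analogue of the zero-copy idea quoted above: FileChannel.transferTo lets the kernel move file bytes to a socket without an intermediate user-space copy on platforms that support it; the file name and the peer address are placeholders, not part of any of the cited systems.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Loose analogue of zero-copy data movement (not RDMA): transferTo() can let the
    // kernel move file bytes to the socket without copying them through user space
    // (e.g. via sendfile) where the platform supports it.
    public class ZeroCopySendSketch {
        public static void main(String[] args) throws IOException {
            Path path = Path.of("payload.bin");                          // hypothetical file
            try (FileChannel file = FileChannel.open(path, StandardOpenOption.READ);
                 SocketChannel socket = SocketChannel.open(
                         new InetSocketAddress("example.org", 9000))) {  // placeholder peer
                long position = 0;
                long remaining = file.size();
                while (remaining > 0) {
                    long sent = file.transferTo(position, remaining, socket);
                    position += sent;
                    remaining -= sent;
                }
            }
        }
    }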

    The file system Lustre is

  • used for High Performance Computing (HPC), generally large-scale cluster computing,
  • structured around the RDMA mechanism or technology, and
  • originally based on the networking stack called Portals 3, which
    • "is a [M]essage [P]assing [I]nterface [(MPI)] to allow scalable, high-performance network communication between nodes of a parallel computing system",
    • "attempts to provide a cohesive set of building blocks with which a wide variety of upper layer protocols (such as MPI, [SHared MEMory and later Symmetric Hierarchical MEMory (]SHMEM[)], or UPC [and X10]) may be built", and
    • supports light-weight communication models, such as PGAS.

    VIA, RDMA, and Portals lead us directly to request-response protocols for Inter-Process Communication (IPC), such as, for example, the

  • Remote Procedure Call (RPC), specifically its implementation with
    • doors of the operating system Spring (see also the Clarification of the 18th of May 2018) and
    • portals of the SPACE approach once again,

    and

  • Remote Method Invocation (RMI), which is the object-oriented programming analog of the RPC protocol, specifically its implementation with the
    • Common Object Request Broker Architecture (CORBA) and
    • Service Object-Oriented Architecture (SOOA) Jini based on the Java technology,

    which we have developed further as well.
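
    To make the RPC/RMI distinction above concrete, here is a minimal, hedged Java RMI sketch; the service name "Echo" and its single method are invented for illustration and are not part of any of the cited systems.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Remote interface: every method may fail with a RemoteException,
    // because the call travels over the network.
    interface Echo extends Remote {
        String echo(String message) throws RemoteException;
    }

    // Server side: export the implementation and bind it in an RMI registry.
    class EchoServer implements Echo {
        public String echo(String message) { return "echo: " + message; }

        public static void main(String[] args) throws Exception {
            Echo stub = (Echo) UnicastRemoteObject.exportObject(new EchoServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Echo", stub);
            System.out.println("Echo service bound.");
        }
    }

    // Client side: look up the stub and invoke the remote method like a local one.
    class EchoClient {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            Echo echo = (Echo) registry.lookup("Echo");
            System.out.println(echo.echo("hello"));
        }
    }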

    Furthermore, an implication is that validated and verified, and validating and verifying, as well as resilient (e.g. fault-tolerant, trustworthy (e.g. available, reliable)), or distributed variants of

  • High Performance and High Productivity Computing Systems (HP²CSs),
  • active object model, active message model, actor model, and also actor-based and agent-based systems,
  • Multi-Agent Systems (MASs),
  • Swarm Computing Systems (SCSs),
  • Cognitive Agent Systems (CASs),
  • Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES),
  • Mediated Reality Environments (MedREs), and
  • Synthetic Reality Environments (SREs),

    and also validated and verified, and validating and verifying variants of

  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs), including
    • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs) based on the Byzantine Fault Tolerance (BFT) protocols or the Byzantine-Resilient Replication (BRR) method, including
      • distributed ledgers, such as
        • blockchain-based systems with inter-blockchain communication,

        and

      • Object-Oriented (OO 1) systems with object replication in the CORBA or the SOOA,

    are also included in our OS and therefore available for multiprocessor systems and multicore processors by design, obviously (see also the note Dump that island system of the 10th of May 2018 and the Clarification of the 11th of May 2018).
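
    As a small, hedged illustration of the Byzantine Fault Tolerance (BFT) protocols mentioned above: classical BFT replication (e.g. PBFT) needs n ≥ 3f + 1 replicas to tolerate f Byzantine faults and operates on quorums of 2f + 1 replicas. The following sketch only computes these well-known bounds and is not an implementation of a BFT protocol.

    // Well-known sizing rules of classical Byzantine Fault Tolerance (e.g. PBFT):
    // n >= 3f + 1 replicas tolerate f Byzantine faults, with quorums of 2f + 1.
    public class BftSizingSketch {
        static int minReplicas(int faults) { return 3 * faults + 1; }
        static int quorumSize(int faults)  { return 2 * faults + 1; }
        static int maxFaults(int replicas) { return (replicas - 1) / 3; }

        public static void main(String[] args) {
            for (int f = 1; f <= 3; f++) {
                System.out.printf("f=%d -> n>=%d, quorum=%d%n",
                        f, minReplicas(f), quorumSize(f));
            }
            System.out.println("7 replicas tolerate f=" + maxFaults(7) + " Byzantine faults");
        }
    }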

    {The explanation in this section is not quite correct. Some features were already included in, for example, the Open Multi-Processing (OpenMP) API and also the programming language X10 before the official presentation of our OS, but, for example, the relation of OpenMP to MPI remains undecided, and OpenMP is not meant for non-shared memory systems.}
    In fact, the latter goes beyond e.g. the Open Multi-Processing (OpenMP) Application Programming Interface (API) and is also an original and unique concept of our OS, which does not differentiate between a local computing system, a remote computing system, and a distributed system, for example by applying distributed operating systems and parallel operating systems for parallel computing systems and supercomputing systems on

  • a single processor with multiple cores,
  • the whole Internet, when transforming it to a supercomputing system or supercomputer and eventually to the Ontologic Net (ON), or
  • the whole World Wide Web (WWW), when transforming it to a High Performance and High Productivity Computing System (HP²CS) and eventually to the Ontologic Web (OW),

    because our OS dissolves the boundaries of systems and is molecular or liquid in this relation as well due to our Ontologic(-Oriented) (OO 3) paradigm.

    This has also implications for other developments. Indeed, as far as we can see, the foundations for the User Level Failure Mitigation (ULFM) proposal came with our OS as well and hence the ULFM Message Passing Interface (ULFM-MPI) standard, specifically the Loopback (send-to-self) variant, and the variants based on the VIA and hence on RDMA, including InfiniBand, iWARP, RoCE, Soft IB and Soft RoCE, came after our OS.

    It's not a trick - It's Ontologics


    05.June.2018
    Website update
    We have substituted

  • the term distributed computing system with the term distributed system in a first step and
  • Reliable, Trustworthy, and Distributed System (RTDS) with Fault-Tolerant, Reliable, Trustworthy, and Distributed System (FTRTDS) in a second step

    in all publications of the months April 2018 and May 2018, because

  • distributed computing is the field of computer science, which studies distributed systems, and
  • on the one hand, trustworthiness, including reliability, does not comprise fault tolerance, but on the other hand, resilience includes both fault tolerance and trustworthiness.

    These features of our Ontologic System (OS) originate from the reflective distributed operating systems Apertos (Muse) and TUNES OS (see also the Clarification of the 11th of May 2018).


    06.June.2018
    Style of Speed Further steps
    We have developed two new powered-lift aircraft models that are based on already approved and tested technologies with improved performance.
    In the next steps we will work on their attractive designs and also look for a place in our garage and for a place for our new garage.


    07.June.2018

    Investigations::Multimedia

    *** Work in progress - reduction, more comments, and better comparison with Muse, Apertos, TUNES, etc. ***

  • International Business Machines: We have new evidence in the field of High Performance and High Productivity Computing Systems (HP²CSs) including supercomputing systems or supercomputers, specifically in relation with asynchronicity, the concurrent programming paradigm, and the programming language X10.
    The issues are the additions of the Constraint Programming (CP or ConsP) and Concurrent Constraint Programming (CCP) paradigms(?!), the Resilient X10 and Elastic X10 functionalities, capabilities, and extensions, the ULFM-MPI standard (see also the last section of the Clarification of the 4th of June 2018), some applications, specifically those related to simulation, graph processing, and Actor- and Agent-Oriented Programming (AAOP), and also some more points directly copied from our Ontologic System (OS).

    We quote the binding document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing", which was presented at OOPSLA'05:
    "We have designed a modern object-oriented programming language, X10, for high performance, high productivity programming of NUCC systems. [These makes very clear what the focus of X10 was before we presented our OS.]",
    "These [modern OO 1] languages[, such as Java and C#,] have also made concurrent and distributed programming accessible to application developers, rather than just system programmers. They have supported two kinds of platforms: a uniprocessor or shared-memory multiprocessor (SMP) system where one or more threads execute against a single shared heap in a single VM, and a loosely-coupled Distributed System (DS) in which each node has its own VM and communicates with other nodes using inter-process protocols, such as Remote Method Invocation (RMI) [... and Remote Procedure Call (RPC)].",
    "[...] future systems are rapidly moving from uniprocessor to multiprocessor configurations. Parallelism is replacing frequency scaling as the foundation for increased compute capacity. We believe future server systems will consist of multi-core SMP nodes with non-uniform memory hierarchies, interconnected in horizontally scalable cluster configurations such as blade servers. We refer to such systems as Non-Uniform Cluster Computing (NUCC) systems to emphasize that they have attributes of both Non-Uniform Memory Access (NUMA) systems and cluster systems. [Simply said, that was the HP²CS and not more and the rest came with our OS.]",
    "Current [Object-Oriented (]OO [1)] language facilities for concurrent and distributed programming, such as threads, the java.util.concurrent library and the java.rmi package, are inadequate for addressing the needs of NUCC systems. They do not support the notions of non-uniform access within a node or tight coupling of distributed nodes. Instead, the state of the art for programming NUCC systems comes from the High Performance Computing (HPC) community, and is built on libraries such as MPI [51 [Using MPI: Portable Parallel Programming with the Message Passing Iinterface]].",
    "[...] current HPC programming models do not offer an effective solution to the problem of combining multithreaded programming and distributed-memory communications. Given that the majority of future desktop systems will be SMP nodes, and the majority of server systems will be tightly-coupled, horizontally scalable clusters, we believe there is an urgent need for a new OO programming model for NUCC systems.",
    "The X10 effort is part of the IBM PERCS project (Productive Easy-to-use Reliable Computer Systems). [...] PERCS is using a hardware-software co-design methodology to integrate advances in chip technology, architecture, operating systems, compilers, programming language and programming environment design. [Guess where this truly originated from.]",
    "X10 is a "big bet" in the PERCS project. It aims to deliver on the PERCS 10× promise by developing a new programming model, combined with a new set of tools [...] []",
    "X10 is intended to increase programmer productivity for NUCC systems without compromising performance. X10 is a typesafe, modern, parallel, distributed object-oriented language, with support for high performance computation over distributed multi-dimensional arrays.",
    "To date, we have designed the basic programming model; defined the 0.41 version of the language (and written the Programmers' Manual); formalized its semantics [47 [Concurrent clustered programming. August 2005]] and established its basic properties; built a single-VM reference implementation; and developed several benchmarks and applications. [So here we have clear dates.]",
    "We expect the ongoing work on several cutting-edge applications, tools for the X10 programmer, and an efficient compiler and multi-VM runtime system [...]. [We are not sure what and where the multi-Virtual Machine runtime system is. But we explained that we view agents of a Multi-Agent System (MAS) as VMs as well.]",
    "Analyzability. X10 programs are intended to be analyzable by programs (compilers, static analysis tools, program refactoring tools). [...] Ideally, it should be possible to develop a formal semantics for the language and establish program refactoring rules that permit one program fragment to be replaced by another while guaranteeing that no new observable behaviors are introduced. With appropriate tooling support (e.g. based on Eclipse) it should be possible for the original developer, or a systems expert interested in optimizing the behavior of the program on a particular target architecture, to visualize the computation and communication structure of the program, and refactor it (e.g. by aggregating loops, separating out communication from computation, using a different distribution pattern etc). At the same time analyzability contributes to performance by enabling static and dynamic compiler optimizations. [We have here system development tools and a formal semantics of the language, but somehow we have to guess that formal modeling and refactoring come with Eclipse, specifically the Unified Modeling Language (UML), and if program refactoring has something in common with term rewriting, and also cannot see formal verification (see also the related comments given to the quotes related to dependent types and constraint types below). In addition, we have here only the substitution or refactoring of program fragments of one specific programming language but not of multiple or even arbitrary programming languages as allowed with our Ontologic-(Oriented) (OO 3) paradigm and Ontologic Programming (OP) paradigm.]",
    "Figure 1 outlines the software stack that we expect to see in future NUCC systems, spanning the range from tools and very high level languages to low level parallel and communication runtime systems and the operating system. X10 has been deliberately positioned at the midpoint, so that it can serve as a robust foundation for parallel programming models and tools at higher levels while still delivering scalable performance at lower levels. and thereby achieve our desired balance across safety, analyzability, scalability and flexibility.",
    "Figure 1: Position of X10 Language in Software Stack for NUCC Systems [shows] Components [-] Domain specific frameworks[, ...] Exploitation of scalable performance at lower levels of NUCC platforms [-] Low Level Parallel/Communication Runtime (MPI + LAPI + RDMA + OpenMP + threads) [-] Integration of high-performance threading and data transfer[, and] Operating System [-] Resource management in user space [LAPI of IBM is a one sided communication programming model, which simulates synchronous and standard communication behavior of MPI and provides completion of non-blocking communication signaled at both ends.(?) As we said before, that was the HP²CS and not more and the rest came with our Ontologic System, as also discussed further below and before in the Clarificaition of the 4th of June 2018.]",
    "2. Use the Java programming language as a starting point for the serial subset of the new programming model. [We took the Internet and the World Wide Web (WWW) as a starting point for the new Ontologic System (OS) with its Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), and a bunch of programming languages as a starting point for our new Ontologic Programming (OP) paradigm.]",
    "3. Introduce a partitioned global address space (PGAS) with explicit reification of locality in the form of places. 4. Introduce dynamic, asynchronous activities as the foundation for concurrency constructs in the language. 5. Include a rich array sub-language that supports dense and sparse distributed multi-dimensional arrays.",
    "Another limitation of the Java language is that its mechanisms for intra-node parallelism (threads) and inter-node parallelism (messages and processes) are too heavyweight and cumbersome for programming large-scale NUCC systems. We decided to introduce the notion of asynchronous activities as a foundation for lightweight "threads" that can be created locally or remotely. Asynchronous activities address the requirements of both thread-based parallelism and asynchronous data transfer in a common framework. When created on a remote place, an asynchronous activity can be viewed as a generalization of active messages [55 [Active messages: a mechanism for integrated communication and computation]].",
    "Several programming language issues deemed not to be at the core were postponed for future work even though they were of significant technical interest to members of the language team. These include: the design of a new module and component system (eliminating class-loaders); [...] the design of a generic, place-based, type system integrated with dependent types (necessary for arrays); [...] design of a new Virtual Machine layer; [...] [But these do not include resilience, specfically fault tolerance, and all the other features that came with our OS.]",
    "We had originally intended to design and implement a technically sophisticated type system that would statically determine whether an activity was accessing non-local data. We soon realized that such a system would take much too long to realize. Since we had to quickly build a functional prototype, we turned instead to the idea of using runtime checks which throw exceptions (e.g. BadPlaceExceptions) if certain invariants associated with the abstract machine were to be violated by the current instruction. This design decision leaves the door open for a future integration of a static type system. [Obviously, the Productive Easy-to-use Reliable Computer Systems (PERCS) project has been started after the responsible entities saw something and had to act swiftly and without any preparations. But as we showed in this investigation, IBM was never able to catch up once again after we overtook with our OS. And finally, IBM was outpaced, because X10 came too late.]",
    "Briefly, X10 may be thought of as the Java language with its current support for concurrency, arrays and primitive built-in types removed, and new language constructs introduced that are motivated by high-productivity high-performance parallel programming for future non-uniform cluster systems.",
    "A place is a collection of resident (non-migrating) mutable data objects and the activities that operate on the data.",
    "X10 0.41 takes the conservative decision that the number of places is fixed at the time an X10 program is launched. Thus there is no construct to create a new place. This is consistent with current programming models, such as MPI, UPC, and OpenMP, that require the number of processes to be specified when an application is launched. We may revisit this design decision in future versions of the language as we gain more experience with adaptive computations which may naturally require a hierarchical, dynamically varying notion of places. [This has been added later as the Elastic X10 capability, though this should also refer to the cloud computing paradigm.]",
    "Places are virtual - the mapping of places to physical locations in a NUCC system is performed by a deployment step (Figure 1) that is separate from the X10 program. Though objects and activities do not migrate across places in an X10 program, an X10 deployment is free to migrate places across physical locations based on affinity and load balance considerations. [Somehow the term migrate is misleading in the context of places and should be substituted with the terms distribute or placed. See also Apertos (Muse).]",
    "While an activity executes at the same place throughout its lifetime, it may dynamically spawn activities in remote places [...]",
    "Note that creating X10 activities is much simpler than creating Java threads. In X10, it is possible for multiple activities to be created in-line in a single method.",
    "X10 has a global address space. This means that it is possible for any activity to create an object in such a way that any other activity has the potential to access it. In contrast, MPI processes have a local address space. An object allocated by an MPI process is private to the process, and must be communicated explicitly to another process through two-sided or one-sided communications. [See also RDMA and exception-less system calls, which are asynchronous system calls without context switch, as developed by us for our OS (see the Investigations::Multimedia of the 15th and 18th of May 2018, and the Clarification of the 4th of June 2018).]",
    "The address space is said to be partitioned in that each mutable location and each activity is associated with exactly one place, and places do not overlap. [See Cognac based on Apertos(?)]",
    "X10 supports a Globally Asynchronous Locally Synchronous (GALS) semantics [12 [TinyGALS: A Programming model for event-driven embedded systems]] for reads/writes to mutable locations. [See again our exception-less system call mechanism.]",
    "An unconditional atomic block is a statement atomic S, where S is a statement. [...] An atomic block is executed by an activity as if in a single step during which all other concurrent activities in the same place are suspended. [See also the atomic active object model in the Cognac system based on Apertos.]",
    "Compared to user-managed locking as in the synchronized constructs in the Java language, the X10 user only needs to specify that a collection of statements should execute atomically and leaves the responsibility of lock management and other mechanisms for enforcing atomicity to the language implementation.",
    "From a scalability viewpoint, it is important to avoid including blocking or asynchronous operations in an atomic block. An X10 implementation may use various techniques (e.g. non-blocking techniques) for implementing atomic blocks [...] [Also compare the atomic block technique with the atomic active object model in the Cognac system based on the actor-based (concurrent and lock-free or non-blocking) distributed operating system Apertos and the actor model in TUNES.]",
    "Like async statements, futures can be used as the foundation for many parallel programming idioms including asynchronous DMA operations, message send/receive, and scatter/gather operations.",
    "There are two classes of activities in SPECjbb, the master activity, which controls the overall program execution along a sequence of operation modes, and one or several warehouse/terminal threads that issue requests to a simulated database and business logic.",
    "In this section, we outline our plans for a multi-VM multinode implementation that can be used to deliver scalable performance on NUCC systems, using IBM's high performance communication library, LAPI [10].",
    "The language-based approach facilitates the definition of precise semantics for synchronization constructs and the shared memory model [7 [Threads cannot be implemented as a library]] [...].",
    "SHMEM [5], MPI [51], and PVM [23 [PVM - Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing]] in contrast, are library extensions of existing sequential languages, as are programming models for Grid computing [21]. While there are pragmatic reasons for pursuing an approach based on libraries or directives, our belief is that future hardware trends towards NUCC systems will have a sufficiently major impact on software to warrant introducing a new language. ["PVM was a step towards modern trends in distributed processing and grid computing but has, since the mid-1990s, largely been supplanted by the much more successful MPI standard for message passing on parallel machines. [...] PVM is a software system that enables a collection of heterogeneous computers to be used as a coherent and flexible concurrent computational resource, or a "parallel virtual machine". The individual computers may be shared- or local-memory multiprocessors, vector supercomputers, specialized graphics engines, or scalar workstations and PCs, that may be interconnected by a variety of networks, such as Ethernet or [Fiber Distributed Data Interface (]FDDI[)]." Obviously, the focus is on NUCC systems. But no PVM and no grid computing also means no cloud computing and no view of the Internet and a collection of mobile devices (e.g. our Ontoscopes) as a supercomputing system or supercomputer or a HP²CS. Yes indeed, we included mobile computing as well at a time when mobile phones were only becoming so-called smartphones and when nobody was imagining at all that there will be mobile multiprocessor systems and mobile multicore processors, and bandwith for internet and mobile networking in the Gbit range, even more than the supercomputers had at that time.]",
    "X10's async and finish are conceptually similar to CILK's spawn and sync constructs.",
    "X10 is somewhat different, in that it introduces places as a language concept and abstraction used by the programmer to explicitly control the allocation of data and processing. A place is similar to a locale in Chapel and a site in Obliq [8]. [...] Chapel's model of allocation is different from X10 and Obliq because Chapel does not require that an object be bound to a unique place during its lifetime or that the distribution of an array remain fixed during its lifetime. [See also Apertos (Muse).]",
    "One of the key difference between X10 and earlier distributed shared object systems such as Linda [1], Munin [6], and Orca [4] is that those systems presented a uniform data access model to the programmer, while using compile-time or runtime techniques under the covers to try and address the locality and consistency issues of the underlying distributed platform. X10, in contrast, makes locality and places explicitly visible to the user [...]. [This feature is not convincing. Why should one make something explicitly when it is already implicitly and automatically done, especially when the goal is to increase performance and productivity? Surprisingly, we cannot see an integration of Autonomic Computing (AC), which is another approach presented by IBM some years before in the filed of server systems.]",
    "X10 activities communicate through the PGAS. [NUCC systems and shared memory.]",
    "X10 follows the GALS model when accessing the PGAS, and has two key benefits compared to other PGAS languages. [The following feature is also not convincing, because the deficits of the uniform way to access shared memory results from bad development and analyzation tools but not from the foundational approach. Furthermore, we also apply the Globally Asynchronous Locally Asynchronous (GALA) semantics for the same reasons given by the developers of X10, as can be seen with our exception-less system call mechanism, and much more, such as the utilization of Artificial Intelligence (AI), Model Checking (MC), simulation, Autonomic Computing (AC), and so on.]",
    "Clocks are a synchronization mechanism that is unique to X10. [...] There are certain restrictions on the usage of clocks that enable a compiler to verify that clock-based synchronization is free from deadlock [47 [Concurrent clustered programming]].",
    "It is conceivable that this work might lead us in a direction where it is natural to create more places dynamically (e.g. reflecting the need to create more concurrent locales for execution, based on the physics of the situation). [No, this has nothing in common with a reflective system property. Furthermore, see the meta space concept of the Cognac system based on Apertos. What is also required is a mechanism similar to provisioning and de-provisioning of resources, specifically compute nodes, which is called elastic computing in relation with the cloud computing paradigm. The developers of X10 found that out as well and later added the Elastic X10 capability, but only after we showed it first with our OS.]",
    "We intend to develop a rich place-based, clock-aware type system that lets a programmer declare design intent, and enables bugs related to concurrency and distribution to be captured at compile time.",
    "We are exploring the use of semantic annotations [13 [Semantic type qualifiers]]. For instance a method could be labeled with a modifier now to indicate that its execution does not invoke a resume or next operation on any clock. Similarly a modifier here on a statement may indicate that all variables accessed during the execution of the statement are local to the current place. [Semantic type qualifiers are ...]",
    "The application of X10 to dynamic HPC problems such as adaptive mesh refinement requires that the runtime dynamically manage the mapping from places to hardware nodes. We are contemplating the design of a "job control language" intended to interact with the continuous program optimization engine [9 [Multiple page size modeling and optimization]]. A programmer may write code in this language to customize the load-balancing algorithm. In an extension of the language which creates new places, such a layer would also specify the hardware location at which these places are to be constructed. [This comes close to grid computing and elastic computing. But honestly, we are not sure what is said here and why FEM, etc. requires the mapping of places to compute nodes, a job control language, and the multiple page size modeling and optimization.]", and
    "Future NUCC systems built out of multi-core chips and tightly-coupled cluster configurations represent a significant departure from current systems, and will have a major impact on existing and future software."

    We also quote the website of X10:
    "X10: Performance and Productivity at Scale [] X10 is a statically-typed object-oriented language, extending a sequential core language with places, activities, clocks, (distributed, multi-dimensional) arrays and struct types. All these changes are motivated by the desire to use the new language for high-end, high-performance, high-productivity computing. [High, high, high.]",

    Introducing X10
    "[...] how does the software deal with the stagnation of single threaded performance and cache memory, and how can the software utilize the additional capabilities provided by multiple cores on a chip? For some classes of applications, such as transaction-based systems, these trends are not problematic. These applications have natural parallelism and thus, can easily adapt to the multicore trend by having appropriate middleware map their parallelism to the multicore chips. However, for other classes of applications, the shift to requiring parallelism to obtain performance is a significant unwanted challenge. ",
    "Utilizing the cloud has emerged as an attractive and viable application development and deployment framework for commercial applications that must process vast amounts of data, utilizing hundreds of (possibly heterogeneous) cores. For these applications, parallelism - once an option - is now a requirement - and must be exploited to achieve historical increases in application performance that have also led to developer productivity improvements, which are key for developing more robust, sophisticated software applications. [But IBM does not give any motivations for grid computing and hence for cloud computing and functionality provided as a Service (aaS) by the design of X10, as can be clearly seen in the chapter 7.1 Language vs. Library of the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing" where the developers say that "SHMEM [5], MPI [51], and PVM [23] in contrast, are library extensions of existing sequential languages, as are programming models for Grid computing [21]. [...] While there are pragmatic reasons for pursuing an approach based on libraries or directives, our belief is that future hardware trends towards NUCC systems will have a sufficiently major impact on software to warrant introducing a new language.".]",
    [...]
    IBM Research is developing the open-source X10 programming language to provide a programming model that can address the architectural challenge of multiple cores, hardware accelerators, clusters, and supercomputers in a manner that provides scalable performance in a productive manner. The project leverages over nine years of language research funded, in part, by the DARPA/HPCS program.
    X10 is a class-based, strongly-typed, garbage-collected, object-oriented language. To support concurrency and distribution, X10 uses the Asynchronous Partitioned Global Address Space programming model (APGAS). This model introduces two key concepts -- places and asynchronous tasks -- and a few mechanisms for coordination. With these, APGAS can express both regular and irregular parallelism, message-passing-style and active-message-style computations, fork-join and bulk-synchronous parallelism. In contrast to hybrid models like MPI+OpenMP, the same constructs underpin both intra- and inter-place concurrency. [See also metaspaces of the Cognac system based on Apertos.]
    Both its modern, type-safe sequential core and simple programming model for concurrency and distribution contribute to making X10 a high-productivity language in the HPC and Big Data spaces. User productivity is further enhanced by providing tools such as an Eclipse-based IDE (X10DT). Implementations of X10 are available for a wide variety of hardware and software platforms ranging from laptops, to commodity clusters, to supercomputers.
    [...]
    An X10 Birds-of-a-Feather session at the October 2010 ACM SPLASH conference drew over 100 researchers (video). Courses and tutorials have been taught based on X10 at universities and major conferences in the US and abroad. The first X10 workshop, X10'11, was held at PLDI'11 in San Jose, CA on June 4, 2011. Subsequent X10 workshops (X10'12, X10'13, X10'14) were also co-located with PLDI; we plan to continue this tradition in 2015. [Important to note is the fact that the programming language X10 has been developed as part of the Productive, Easy-to-use, Reliable Computing System (PERCS) project funded by the High Productivity Computing Systems (HPCS) program of the Defense Advanced Research Projects Agency (DARPA), which comprised the High-Performance Computing Challenge (HPC Challenge) and had the goal to advance computer speed a thousandfold, creating a machine that could execute a quadrillion (10^15) operations a second, known as a petaflop - the computer equivalent of breaking the land speed record. And as with the Manhattan Project, the venue chosen for the supercomputing program. [A] pure High-Performance Computing (HPC) system or supercomputer and the reference implementation of the HPC Challenge Benchmark in C and MPI assumes that the system under test is a cluster of shared memory multiprocessor systems connected by a network. The fields of grid computing and cloud computing as well as the other fields of concern in relation with our OS were not recognized at all at that time in the year 2006. This strengthens our claims a further time.]

    X10 Roadmap
    [...]
    2.5.0 October 2014 X10 2.5 includes a redesign of several Place-related standard library APIs to better support Resilient and Elastic X10.
    [...]
    2.4.0 September 2013 X10 2.4 contains a number of significant language and class library changes that together enhance X10's ability to effectively exploit the increased memory capacity of modern computers. Specifically,

  • All of X10's array types (Rail, Array, DistArray, Region, Point, etc) now use Long (64-bit) indexing.
  • The default type of an integral literal (e.g. 3) is a Long instead of an Int.
  • The addition of new high-performance implementations of one dimensional and multi-dimensional arrays that are optimized for zero-based dense index spaces
    [...]
    2.4.1 December 2013 Bug fixes and performance improvements. First release with "Resilient X10" capabilities for managing Place failure.
    [...]
    2.2.0 May 2011 First "forwards compatible" release. The X10 language specification is now considered to be fairly stable, and we hope to make future language changes in a manner such that all valid X10 2.2 programs will still be valid in future releases.
    2.2.1 September 2011 Bug fixes and performance improvements.
    [...]

    X10 News
    APGAS for Scala Released
    We are happy to announce the release of the APGAS library for Scala. This library provides an implementation of X10's Asynchronous Partitioned Global Address Space (APGAS) programming model for resilient, elastic, parallel, and distributed programming as an embedded domain-specific language for Scala. It is based on the APGAS library for Java.
    [...]
    The main features of this release are improvements to Resilient X10 including significant performance improvements to the implementations of resilient finish, the addition of ULFM-MPI as a network transport for Resilient X10 applications, and enhanced standard library support for writing Resilient X10 applications and frameworks.
    [...]
    ScaleGraph is a graph library for large-scale graph processing built using X10.
    [...]
    A new kernel benchmark suite -- IMSuite: IIT Madras benchmark suite for simulating distributed algorithms has been released. IMSuite implements twelve classical distributed algorithms in two task parallel languages - X10 (x10-2.3.0) and HJ.

    Frequently Asked Questions

    General information about X10
    What is X10? [] X10 is a strongly typed, concurrent, imperative, object-oriented programming language designed for productivity and performance on modern multi-core and clustered architectures. X10 augments the familiar class-based object-oriented programming model with constructs to support execution across multiple address spaces, including constructs for a global object model, asynchrony and atomicity.
    What is the purpose of X10? [] X10 is a language designed to support programming at scale in the multicore era.
    [...]
    What is the origin of X10? [] X10 has been under development as a research project at the IBM T. J. Watson Research Center since 2004, in collaboration with academic partners. The X10 effort is part of the IBM PERCS project in the DARPA program on High Productivity Computer Systems.
    [...]
    What is X10 especially good for? [] X10 is especially good at distributing your application over a cluster of distributed memory machines. In particular, with X10, there is a natural migration path from a single-threaded prototype, to a version that uses multiple cores on an SMP, to a distributed memory implementation that runs across a cluster of SMPs.
    [...]

    X10 for Developers
    How is X10 licensed? [] X10 is released under the Eclipse Public License, v1.0.
    How much does it cost? [] Nothing. X10 is released under the Eclipse Public License, v1.0. [Oh no, the Eclipse Public License is not an open source license accredited by our Society for Ontological Performance and Reproduction (SOPR), and due to the patent related clause we have in this case the same problem of incompatibility, as in the case of the GNU GPL v3, a fixed fee for each reproduction of a related part of our Ontologic System (OS) is due, and a share (actually proposed are 5%) of the overall revenue generated with an Ontologic Application or Ontologic Service (OAOS) is due for using X10.]
    [...]
    Is IBM using X10? [] X10 does not yet ship in any IBM products. There are efforts underway to develop X10-based application frameworks and middleware that are intended for eventual production use.
    [...]
    How can I contribute code to the X10 project? [] We welcome contributions from the community! All contributions will be licensed under the Eclipse Public License and must be accompanied by a Contributor's License Agreement. See Contributing to X10 for details on the process. [The Contributor's License Agreement is also not accredited by our Society for Ontological Performance and Reproduction (SOPR).]
    [...]

    What kind of programming language is X10? []
    [...]
    How does X10 compare with MPI? [] X10 is a higher level programming model than MPI. In general, X10 code should be much more concise than the equivalent MPI. There are at least two major philosophical differences between the MPI programming model and X10: the control flow, and the memory model.
    The MPI control flow model is SPMD (Single-Program Multiple Data): the program begins with a single thread of control in each process. In contrast, an X10 program begins with a single thread of control in the root place, and an X10 program spawns more threads of control across places using async and at.
    The MPI memory model is a completely distributed memory model. MPI processes communicate via message-passing. There is no shared global address space in MPI, so user code must manage the mappings between local address spaces in different processes. In contrast, X10 supports a global shared address space. While an X10 activity can only directly access memory in the local Place (address space), it can name a location in a remote place, and the system maintains the mapping between the global address space and each local address space.
    [...]

    Implementation
    [...]
    Why are there two compile backends? [...] In particular, the Java backend will support interoperability with Java code running on JVMs; the C++ backend will support interoperability with certain libraries for hardware accelerators and GPUs.
    [...]
    What happens if a place dies during the execution? Is it possible to detect it and recover the error? [] By default, the X10 runtime system is not robust with respect to Place failures. However, starting with X10 2.4.1 X10 can be run in a resilient mode. Resiliency adds some overhead to finish/async operations but enables the program to be notified of Place failures and continue running.
    [...]

    Language Details
    How does X10 deal with concurrency?
    X10 supports two levels of concurrency.
    The first level corresponds to concurrency within a single shared-memory process, which is represented by an X10 Place. Usually, you would use one Place per shared memory multiprocessor. The main construct for concurrency within a Place is the X10 "async" construct.
    The second level of X10 concurrency supports parallelism across Places, or analogously, across processes that do not share memory. Usually, this would correspond to concurrency across nodes in a cluster of workstations. The main construct for managing such concurrency in X10 is the "at" construct.
    Additionally, X10 provides various libraries and features to support particular concurrent operations and data structures, such as reductions and distributed arrays.
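
    A hedged plain-Java analogue of the intra-place level only (this is not X10): "async"-like tasks are submitted to a thread pool and a "finish"-like step waits for all of them; the inter-place level ("at") would additionally require a distributed runtime and is not modelled here.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Plain-Java analogue of X10's intra-place concurrency: "async"-like tasks are
    // submitted to a pool, and a "finish"-like step waits for all of them to complete.
    public class AsyncFinishSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<Long>> pending = new ArrayList<>();

            for (int i = 0; i < 8; i++) {                  // spawn "async"-like activities
                final long n = 1_000_000L + i;
                pending.add(pool.submit(() -> {
                    long sum = 0;
                    for (long k = 0; k < n; k++) sum += k;
                    return sum;
                }));
            }

            long total = 0;
            for (Future<Long> f : pending) total += f.get();   // "finish": wait for all
            System.out.println("total = " + total);

            pool.shutdown();
        }
    }
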
    [...]
    What operations are atomic? [] The "atomic" keyword marks operations that will perform atomically. Note that the X10 atomic keyword is an extremely heavy hammer: it grabs a lock that serializes all atomic operations in a Place. Usually, atomic should be used for prototyping, but it will probably not scale well in highly contended code.
    For better performance, the x10.util.concurrent library provides various atomic operation and locks, which are implemented more efficiently using operations such as compare-and-swap.
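
    The answer above contrasts the coarse atomic keyword with compare-and-swap based utilities. A hedged plain-Java illustration of the same trade-off, with an invented counter class:

    import java.util.concurrent.atomic.AtomicLong;

    // Two ways to make a counter increment atomic: a coarse lock (analogous to a
    // heavyweight "atomic" block) versus a lock-free compare-and-swap retry loop.
    public class AtomicCounterSketch {
        private long lockedCounter = 0;
        private final AtomicLong casCounter = new AtomicLong();

        // Coarse-grained: serialises every caller on one lock.
        synchronized void incrementLocked() {
            lockedCounter++;
        }

        // Lock-free: retry with compare-and-swap until the update wins.
        void incrementCas() {
            long current;
            do {
                current = casCounter.get();
            } while (!casCounter.compareAndSet(current, current + 1));
        }

        public static void main(String[] args) throws InterruptedException {
            AtomicCounterSketch sketch = new AtomicCounterSketch();
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int k = 0; k < 100_000; k++) {
                        sketch.incrementLocked();
                        sketch.incrementCas();
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            System.out.println(sketch.lockedCounter + " / " + sketch.casCounter.get());
        }
    }
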
    How does X10 relate to PGAS?
    PGAS (Partitioned Global Address Space) is an abstract programming model, which presents an abstraction of a single shared address space, but the address space is partitioned into regions based on an underlying NUMA (non-uniform memory access) architecture. There are several PGAS languages, including X10, Fortress, Chapel, and UPC.
    APGAS (Asynchronous PGAS) is a variant of PGAS that supports asynchronous operations and control flow. X10 is an APGAS language.
    What is Place and how is it used? [] An X10 Place corresponds to a single operating system process. X10 activities can only directly access memory from the Place where they live; they must use constructs like "at" to access memory at remote places.
    Usually, in production, you would run with one X10 Place per node in a cluster of workstations. For debugging and development, it is possible to run with multiple Places installed in a single machine.
    [...]
    What are dependent types?
    A dependent type is a type that depends on a value in the underlying programming language. X10 uses dependent types in the X10 "constraint types". Often, X10 constraint types are used to enforce safety in multi-place code.
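
    Plain Java has no dependent or constrained types, so the following hedged analogue of an X10 constrained type such as Int{self >= 0} can only enforce the constraint with a runtime check at construction time (comparable to the runtime checks such as BadPlaceException mentioned in the OOPSLA'05 quotes above); the class NonNegative is invented for illustration.

    // Hedged Java analogue of an X10 constrained type such as "Int{self >= 0}":
    // the constraint is checked once at construction time instead of being part
    // of the static type, so violations surface only at run time.
    public final class NonNegative {
        private final int value;

        private NonNegative(int value) {
            this.value = value;
        }

        public static NonNegative of(int value) {
            if (value < 0) {
                throw new IllegalArgumentException("constraint violated: " + value + " < 0");
            }
            return new NonNegative(value);
        }

        public int value() { return value; }

        public static void main(String[] args) {
            NonNegative ok = NonNegative.of(42);
            System.out.println("ok: " + ok.value());
            NonNegative bad = NonNegative.of(-1);   // throws: the constraint is violated
            System.out.println("never reached: " + bad.value());
        }
    }
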
    [...]

    X10 History
    The genesis of the X10 project was the DARPA High Productivity Computing Systems (HPCS) program. As such, X10 is intended to be a programming language that achieves "Performance and Productivity at Scale." The primary hardware platforms being targeted by the language are clusters of multi-core processors linked together into a large scale system via a high-performance network. Therefore, supporting both concurrency and distribution are first class concerns of the program language design. The language must also support the development and use of reusable application frameworks to increase programmer productivity; this requirement motivates the inclusion of a sophisticated generic type system, closures, and object-oriented language features. Finally, like any new language, to gain acceptance X10 must be able to smoothly interoperate with existing libraries written in other languages. [See the reflective, active object- and actor-based (concurrent and lock-free or non-blocking), and distributed operating system Apertos (Muse) and its multilingual property. See also the quotes and comments given in relation to the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing".]

    X10 License
    X10 is released under the Eclipse Public License v1.0.

    A Brief Introduction To X10 [...] Copyright © 2012-2014

    Introduction
    [...]
    In the last few years this [standard] model [of computation] has been changed fundamentally, and forever. We have entered the era of concurrency and distribution.
    [...] Computer manufacturers have turned to multi-core parallelism, a radical idea for mainstream computing. [...] Multiple threads may now read and write the same locations simultaneously.
    [...] The widespread availability of cheap commodity processors and advances in computer networking mean that clusters of multiple computers are now commonplace. Further, a vast increase in the amount of data available for processing means that there is real economic value in using clusters to analyze this data, and act on it.
    [...]
    Consequently, the standard model must now give way to a new programming model. This model must support execution of programs on thousands of multi-core computers, with tens of thousands of threads, operating on petabytes of data.
    [...]
    Since 2004, we have been developing just such a new programming model. We began our work as part of the DARPA-IBM funded PERCS research project. The project set out to develop a petaflop computer (capable of 10^15 operations per second), which could be programmed ten times more productively than computer of similar scale in 2002. Our specific charter was to develop a programming model for such large scale, concurrent systems that could be used to program a wide variety of computational problems, and could be accessible to a large class of professional programmers. [And we have developed it as well but even transformed the whole Internet into a supercomputer.]
    [...]
    The programming model we have been developing is called the APGAS model, the Asynchronous, Partitioned Global Address Space model. It extends the standard model with two core concepts: places and asynchrony. The collection of cells making up memory are thought of as partitioned into chunks called places, each with one or more simultaneously operating threads of control. A cell in one place can refer to a cell in another - i.e. the cells make up a (partitioned) global address space. Four new basic operations are provided. An async spawns a new thread of control that operates asynchronously with other threads. An async may use an atomic operation to execute a set of operations on cells located in the current place, as if in a single step. It may use the at operation to switch the place of execution. Finally, and most importantly it may use the finish operation to execute a sequence of statements and wait for all asyncs spawned (recursively) during their execution to terminate. These operations are orthogonal and can nest arbitrarily with few exceptions. The power of the APGAS model lies in that many patterns of concurrency, communication and control - including those expressible in other parallel models of computation such as PThreads, OpenMP, MPI, Cilk - can be effectively realized through appropriate combinations of these constructs. Thus APGAS is our proposed replacement for the standard model. [See also Muse, Apertos, and Cognac based on Apertos, as well as Askemos. OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.]
    Any language implementing the old standard model can be extended to support the APGAS model by supplying constructs to implement these operations. This has been done for Java™ (X10 1.5), C (Habanero C), and research efforts are underway to do this for UPC.
    [...]
    It runs on the PERCS machine (Power 7 CPUs, P7IH interconnect), on Blue Gene machines, on clusters of commodity nodes, on laptops; on AIX, Linux, MacOS; on [Parallel Active Message Interface (]PAMI[)], and [Message Passing Interface (]MPI[)]; on Ethernet and Infiniband [MPI, LAPI of IBM, RDMA, and OpenMP are mentioned in Figure 1 of the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing", but not PAMI of IBM, which was added at a later time. Also note that Ethernet was merely meant in conjunction with MPI but not with RDMA, while Infiniband was merely meant in conjunction with RDMA but not with Ethernet at that time (see once again the Clarification of the 4th of June 2018).].

    X10 Basics

    Types
    X10 is a statically type-checked language.
    [...]
    Constrained types are a key innovation of X10. A constrained type is of the form T{c} where T is a type and c is a Boolean expression of a restricted kind. c may contain references to the special variable self, and to any final variables visible at its point of definition. Such a type is understood as the set of all entities o which are of type T and satisfy the constraint c when self is bound to o. [See the OntoLix and OntoLinux Further steps of the 5th of March 2017 to find out how and why the actor model and the Concurrent Constraint Programming (CCP or ConcConsP), Concurrent Logic Programming (CLP or ConcLP), Constraint Logic Programming (CLP or ConsLP), and Concurrent Constraint Logic Programming (CCLP) paradigms are integrated with each other and also in our OS since its official start in the year 2006. See also the Logic Programming (LP) language Prolog, which includes CLP since around the year 1987 and is one of the programming languages included in the SimAgent Toolkit, which again is integrated in our OntoBot software component and therefore in our OS, {and} the visual programming language ToonTalk, which adds the Concurrent Logic Programming (CLP or ConcLP) language Concurrent Prolog to our OntoBot and integrates it in our OS once again (see the (OntoLix and) OntoLinux Website update of the 30th of January 2008){, as well as Maude?}. The integration of visual CLP included in ToonTalk with the actor model and CCP included in TUNES through our Ontologic System Architecture (OSA) in the OntoBot and also in the OntoBlender software component results in the visual CCLP paradigm as a first by-product or implicit innovation. The document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing" does not mention anything related to constraint programming but only dependent types and semantic type qualifiers. Indeed, the approach of constrained types was introduced with the document titled "Constrained Types for Object-Oriented Languages" created around May 2008 and presented in October 2008. It seems that the developers of IBM have not seen CCP and CCLP in TUNES but only in ToonTalk. Somehow, the constrained types approach is a second by-product or implicit innovation of our OS, specifically its OntoBot software component, which shows once again the originality and uniqueness of our work by the way.]

    The APGAS model
    [...]

    Async
    The fundamental concurrency construct in X10 is async:
    [...]

    Atomic
    [...]
    An atomic operation is an operation that is performed in a single step with respect to all other activities in the system (even though the operation itself might involve the execution of multiple statements).
    [...]
    The conditional atomic statement is an extremely powerful construct. It was introduced in the 1970s by Per Brinch Hansen and Tony Hoare under the name "conditional critical region".
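    [As an illustration of our own, assuming X10 2.x syntax, where the conditional atomic statement is written when, a sketch might look as follows; the names AtomicSketch and Shared are invented for the example:]

      // Illustrative sketch only; assumes X10 2.x syntax.
      import x10.io.Console;

      public class AtomicSketch {
          static class Shared { var sum:Long = 0; var done:Boolean = false; }   // helper for this example

          public static def main(args:Rail[String]) {
              val s = new Shared();
              async {
                  for (i in 1..100) {
                      atomic { s.sum = s.sum + i; }   // each update is one indivisible step
                  }
                  atomic { s.done = true; }
              }
              // the conditional atomic statement: block until the condition holds,
              // then evaluate the condition and execute the body as a single atomic step
              when (s.done) {
                  Console.OUT.println("sum = " + s.sum);
              }
          }
      }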

    Places
    We come now to a central innovation of X10, the notion of places. Places permit the programmer to explicitly deal with notions of locality. [Ah, what ...? See Askemos.]

    Motivation
    Locality issues arise in three primary ways.
    First, consider you are writing a program to deal with enormous amounts of data - say terabytes of data, i.e. thousands of gigabytes. Now you may not have enough main memory on a single node to store all this data - a single node will typically have tens of gigabytes of main storage. So therefore you will need to run your computation on a cluster of nodes: a collection of nodes connected to each other through some (possibly high-speed) interconnect. That is, your single computation will actually involve the execution of several operating system level processes, one on each node. Unfortunately, accessing a memory location across a network is typically orders of magnitude slower (i.e. has higher latency) than accessing it from a register on the local core. Further, the rate at which data can be transferred to local memory (bandwidth) is orders of magnitude higher than the rate at which it can be transferred to memory across the cluster.
    As with implicit parallelism, one could try to write extremely clever compilers and runtimes that try to deal with this memory wall implicitly. Indeed, this is the idea behind distributed shared memory (DSM). The entire memory of a collection of processes is presented to the programmer as a single shared heap. Any activity can read and write any location in shared memory. However, there are no efficient implementations of DSM available today.
    [...]
    A second primary motivation arises from heterogeneity. Computer architects are looking to boost computational power by designing different kinds of specialized cores, each very good at particular kinds of computations. In general, these accelerators interact with the main processor at arm's length.
    Two primary cases in point are the [...] Cell Broadband Engine ("Cell processor" for short), and general-purpose graphical processing engines (GPGPUs for short) [...]. [Essentially, a GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs. See the points "Multiprocessing (see Linux)" and "Parallel operating of graphic cards, and other multimedia cards from different manufacturers" listed in the Feature-List #1 on the website of our OS OntoLinux.]
    The third motivation is similar to the second, but involves only homogeneous cores. Multiple cores may share precious resources, such as L1 and L2 cache. To improve performance, it may make sense to bind activities to particular cores, in particular to force certain groups of activities to work on the same cores so that they can amortize the cost of cache misses (because they are operating on the same data). Or it may make sense to bind them to different cores that do not share an L2 cache so that the execution of one does not pollute the cache lines of the other.

    The at construct
    A place in X10 is a collection of data and activities that operate on that data. A program is run on a fixed number of places. The binding of places to hardware resources (e.g. nodes in a cluster, accelerators) is provided externally by a configuration file, independent of the program. [The problem is that in the Internet nodes are coming and going. In addition, the reflective property of our OS, through the integration of Evoos and Apertos (Muse), allows activities and hence places and hence nodes to be added at runtime, which also shows that grid computing and cloud computing were not of concern at first. As usual, we made no distinction between the language-based approach to concurrency and distribution and the library extension-based approach.]
    [...]
    In X10 v2.5.0 all places are uniform. In future versions of the language we will support heterogeneity by permitting different kinds of places, with the ability to check the attributes of a place statically, and hence write code that depends on the kind of place it is running on.
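    [A small illustrative sketch of our own, assuming X10 2.x syntax, of at used both as a statement and as an expression whose result is copied back to the originating place; the class name PlacesSketch is invented for the example:]

      // Illustrative sketch only; assumes X10 2.x syntax.
      import x10.io.Console;

      public class PlacesSketch {
          public static def main(args:Rail[String]) {
              Console.OUT.println("running with " + Place.numPlaces() + " places");

              // at as a statement: execute the body at place p, then continue here
              for (p in Place.places()) {
                  at (p) Console.OUT.println("now executing at " + here);
              }

              // at as an expression: the value computed at the remote place is copied back
              var sum:Long = 0;
              for (p in Place.places()) {
                  val id = at (p) here.id;
                  sum += id;
              }
              Console.OUT.println("sum of place ids: " + sum);
          }
      }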

    PlaceLocalHandle
    Another useful abstraction for a partitioned global address space is that of an abstract reference that can be resolved to a different object in each partition of the address space. Such a reference should be both efficiently transmittable from one Place to another and also be efficiently resolvable to its target object in each Place. A primary use of such an abstraction is the definition of distributed data structures that need to keep a portion of their state in each Place.
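    [A hedged sketch of our own of how such a handle is typically used; the factory method PlaceLocalHandle.make, the per-place ArrayList, and the class name PlhSketch are assumptions made for illustration only, in X10 2.x style:]

      // Illustrative sketch only; assumes X10 2.x syntax and the PlaceLocalHandle.make factory method.
      import x10.io.Console;
      import x10.util.ArrayList;

      public class PlhSketch {
          public static def main(args:Rail[String]) {
              // one independent ArrayList per place; the handle itself is small and
              // can be transmitted between places cheaply
              val plh = PlaceLocalHandle.make[ArrayList[Long]](
                  Place.places(), () => new ArrayList[Long]());

              finish for (p in Place.places()) at (p) async {
                  plh().add(here.id);   // plh() resolves to this place's own list
              }

              Console.OUT.println("place 0 holds " + plh().size() + " element(s)");
          }
      }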

    The X10 Performance Model
    [...]
    Desirable characteristics of a performance model include simplicity, predictive ability, and stability across different implementations of the language. The performance model should abstract away all non-essential details of the language and its implementation, while still enabling reasoning about those details that do have significant performance impact. [Given is the reference "A performance model for X10 applications: what's going on under the hood? In Proceedings of the 2011 ACM SIGPLAN X10 Workshop, X10 '11".]

    Fundamental X10 Performance Model
    [...]
    X10 Type System
    The type systems of X10 and Java differ in three ways that have important consequences for the X10 performance model. First, although X10 classes are very similar to Java's, X10 adds two additional kinds of program values: functions and structs. Second, X10's generic type system does not have the same erasure semantics as Java's generic types do. Third, X10's type system includes constrained types, the ability to enhance type declarations with boolean expressions that more precisely specify the acceptable values of a type.
    Functions in X10 can be understood by analogy to closures in functional languages or local classes in Java. They encapsulate a captured lexical environment and a code block into a single object such that the code block can be applied multiple times on different argument values.
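    [A one-line illustration of our own, assuming X10 2.x syntax, of such a function value capturing its lexical environment; the class name ClosureSketch is invented for the example:]

      // Illustrative sketch only; assumes X10 2.x syntax.
      import x10.io.Console;

      public class ClosureSketch {
          public static def main(args:Rail[String]) {
              val scale = 3;                         // part of the captured lexical environment
              val triple = (x:Long) => scale * x;    // a function value of type (Long) => Long
              Console.OUT.println(triple(14));       // the code block is applied like a method: prints 42
          }
      }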
    [...]
    Constrained types are an integral part of the X10 type system and therefore are intended to be fully supported by the runtime type infrastructure. Although we expect many operations on constrained types can be checked completely at compile time (and thus will not have a direct runtime overhead), there are cases where dynamic checks may be required. Furthermore, constrained types can be used in dynamic type checking operations (as and instanceof). We have also found that some programmers prefer to incrementally add constraints to their program, especially while they are still actively prototyping it. Therefore, the X10 compiler supports a compilation mode where instead of rejecting programs that contain type constraints that cannot be statically entailed, it will generate code to check the non-entailed constraint at runtime (in effect, the compiler will inject a cast to the required constrained type). When required, these dynamic checks do have a performance impact. Therefore part of performance tuning an application as it moves from development to production is reducing the reliance on dynamic checking of constraints in frequently executed portions of the program. [Here we have not only dynamic checking of the programming language Java and its RunTime System (RTS) but also some parts of logic programming and model checking, which corresponds with the basic property of (mostly) being validated and verified and the related functionality of our OntoBot and OntoBlender software components. See also the section Formal Verification of the webpage Links to Software on the one side and the sections 2.3 Methodological Issues, 6.1 Reference Implementation, 7.3 Communication, and 8.1.2 Type System of the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing" (keyword check*) also quoted above/below to see once again that there were no such features in X10 before we presented our OS. In the section 2.3 Methodological Issues we have "We had originally intended to design and implement a technically sophisticated type system that would statically determine whether an activity was accessing non-local data. We soon realized that such a system would take much too long to realize. Since we had to quickly build a functional prototype, we turned instead to the idea of using runtime checks which throw exceptions (e.g. BadPlaceExceptions) if certain invariants associated with the abstract machine were to be violated by the current instruction. This design decision leaves the door open for a future integration of a static type system. It regards such a type system as helping the programmer obtain static guarantees that the execution of the program "can do no wrong", and providing the implementation with information that can be used to execute the program more efficiently (e.g. by omitting runtime checks).", in section 6.1 Reference Implementation "The translator consists of an X10 parser (driven by the X10 grammar), analysis passes, and a code emitter. The analysis passes and code templates are used to implement the necessary static and dynamic safety checks required by the X10 language semantics. [...] 
The X10 RTS is written in the Java programming language and thus can take advantage of object oriented language features such as garbage collection, dynamic type checking and polymorphic method dispatch.", in the section 7.3 Communication we merely have "Second, the dynamic checking associated with the Locality Rule helps guide the user when an attempt is inadvertently made to access a non-local datum. We expect this level of locality checking to deliver the same kinds of productivity benefits to programming NUCC applications that has been achieved by null pointer and array bounds checking in modern serial applications.", and in the section 8.1.2 Type System we merely have "We intend to develop a rich place-based, clock-aware type system that lets a programmer declare design intent, and enables bugs related to concurrency and distribution to be captured at compile time. For instance we intend to develop a type-checker that can ensure (for many programs) that the program will not throw a ClockUseException."]

    Distribution
    An understanding of X10's distributed object model is a key component to the performance model of any multi-place X10 computation. In particular, understanding how to control what objects are serialized as the result of an at can be critical to performance understanding.
    Intuitively, executing an at statement entails copying the necessary program state from the current place to the destination place. The body of the at is then executed using this fresh copy of the program state. What is necessary program state is precisely defined by treating each upwardly exposed variable as a root of an object graph. Starting with these roots, the transitive closure of all objects reachable by properties and non-transient instance fields is serialized and an isomorphic copy is created in the destination place. [See once again Muse "template: A template for an object which is created by the class is described. A template is used [...] for migrating an object from/to another host. When an object migrates to another host, the internal representation of an object is converted to one which is suitable for the target hardware." and Apertos, and Cognac based on Apertos for everything related to migration. The document does not contain a term with the letters graph* or serializ*.]
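    [A small illustrative sketch of our own, in assumed X10 2.x syntax, of this copy-on-at behavior; the class names CopySketch and Data are invented for the example:]

      // Illustrative sketch only; assumes X10 2.x syntax.
      import x10.io.Console;

      public class CopySketch {
          static class Data {                                          // helper type for this example
              val payload = new Rail[Double](8, (i:Long) => i as Double);
          }

          public static def main(args:Rail[String]) {
              val d = new Data();
              // d is an upwardly exposed variable of the at body, so the object graph
              // rooted at d is serialized and an isomorphic copy is created at every
              // destination place (fields declared transient would be omitted from the copy)
              finish for (p in Place.places()) at (p) async {
                  Console.OUT.println(here + " sees its own copy, payload(3) = " + d.payload(3));
              }
          }
      }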

    Async and Finish
    [...]
    Therefore, the primary role of the language implementation is to manage the efficient scheduling of all the potentially concurrent work onto a smaller number of actually concurrent execution resources. [meta-scheduling of Cognac based on Apertos(?!)]

    X10 [...] Implementation Overview
    [...]
    Wide platform coverage both increases the odds of language adoption and supports productivity goals by allowing programmers to easily prototype code on their laptops or small development servers before deploying to larger cluster-based systems for production.

    X10 [...] Runtime
    Figure 4.2 depicts the major software components of the X10 runtime. The runtime bridges the gap between X10 application code and low-level facilities provided by the network transports (PAMI etc.) and the operating system. The lowest level of the X10 runtime is X10RT which abstracts and unifies the capabilities of the various network layers to provide core functionality such as active messages, collectives, and bulk data transfer. [The Deep Computing Messaging Framework (DCMF) is an architecture or transport protocol for communication on the Blue Gene/P and is used to implement MPI(CH2), Charm++, ARMCI, GASNet, etc. It provides active messages, RDMA, and collectives. See also the Clarification of the 4th of June 2018 to find out what it did not include before we presented our OS. See also the meta-object level and meta-meta-object level of Apertos (Muse).]

    Distribution
    The X10 [...] runtime maps each place in the application to one process. [...] Each process runs the exact same executable (binary or bytecode).
    [...]
    X10RT. [] The X10 v2.5.0 distribution comes with a series of pluggable libraries for inter-process communication referred to as X10RT libraries [...]. The default X10RT library--sockets--relies on POSIX TCP/IP connections. The standalone implementation supports SMPs via shared memory communication. The mpi implementation maps X10RT APIs to MPI [...]. Other implementations support various IBM transport protocols (DCMF, PAMI).
    AsyncCopy. [] The X10 v2.5.0 tool chain implements at constructs via serialization. The captured environment gets encoded before transmission and is decoded afterwards. Although such an encoding is required to correctly transfer object graphs with aliasing, it has unnecessary overhead when transmitting immediate data, such as arrays of primitives. As a workaround, the X10 v2.5.0 x10.lang.Rail class provides specific methods--asyncCopy--for transferring array contents across places with lower overhead. These methods guarantee the raw data is transmitted as efficiently as permitted by the underlying transport with no redundant packing, unpacking, or copying. Hardware permitting, they initiate a direct copy from the source array to the destination array using RDMAs. [...] [In fact, this description also includes the internet Wide Area RDMA Protocol (iWARP), RDMA over Converged Ethernet (RoCE) protocol, and Soft InfiniBand, also known as the Soft RoCE protocol. But do not be confused and instead see once again the Clarification of the 4th of June 2018 as well as the Figure 1 of the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing" and the comments given to related quotes in this investigation. At this point we can also see how companies attempt to incrementally steal one feature of our OS after the other.]

    Getting Started with X10
    The simplest way to start out with X10 is by using X10DT and Managed X10 (X10 compiled to Java). X10DT is an integrated development environment based on Eclipse and gives you an X10-aware editor, and integrated compile/execute/debug capabilities.
    [...]
    If a binary distribution is not available for your system, or if you would like to use some of the extended features of X10 (such as running on top of MPI, or compiling X10 programs to execute on GPGPUs), then you will need to build X10 from source.

    Elastic X10
    Elastic X10 is a new feature introduced in the X10 2.5.0 release. Elastic X10 allows places to be added to a computation while it is running. The new places join existing ones, and your program will see the new places via calls to Place.numPlaces() and Place.places() once the new place joins. Elastic X10 is currently only supported in Managed X10, when using the default JavaSockets transport.
    Writing your program to make use of Elastic X10 is similar to Resilient X10. ["In cloud computing, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible".[...] Elasticity is a defining characteristic that differentiates cloud computing from previously proposed computing paradigms, such as grid computing. The dynamic adaptation of capacity, e.g., by altering the use of computing resources, to meet a varying workload is called "elastic computing"." That explanation is wrong in part, because this functionality is already provided by the autonomic and dynamic management (including provisioning and de-provisioning) and job/batch scheduling (including adaptation) of grid computing systems. For example, MOSIX2 for HPC "is particularly suitable for: "Efficient utilization of grid-wide resources, by automatic resource discovery and load-balancing [and] Running applications with unpredictable resource requirements or run times"". For us, cloud computing is, simply said, a marketing term for something like grid computing as a service or grid computing for everybody on the one hand, while on the other hand the more important point here is that the term originated from the field of cloud computing. Also note that one of the basic properties of our OS is (mostly) being self-adaptive and that grid computing is one of its elements. Obviously, elasticity is a feature of our OS since its beginning.]

    Performance Tuning
    [...]

    Properly configuring the X10 Runtime
    The X10 runtime internally executes asyncs by scheduling them on a pool of worker threads within each place. By default, the X10 runtime only creates a single worker thread per place. To exploit multiple cores within a place, you must set the X10_NTHREADS environment variable to the desired number of worker threads to properly exploit the additional cores. A good rule of thumb is to create one X10 worker thread per available core. [See the atomic active object of Apertos and the Cognac system based on Apertos.]
    [...]

    Selecting the right X10RT Implementation
    The sockets implementation of X10RT [relies on POSIX TCP/IP connections and] is supported on all platforms, but multi-place programs using it may not perform as well as alternative transports (higher latency, lower bandwidth). If it is available for your platform, use [PAMI] instead of sockets. As a second choice, use the MPI-based implementation of X10RT.

    X10 on GPUs
    Summary of CUDA Support
    Using the X10/CUDA backend, one can identify fragments of an X10 program to run on the GPU. For ideal workloads, this can give a speedup of up to 30x or more.
    The idea behind the X10/CUDA implementation is to expose the low level CUDA fundamentals in as direct a fashion as possible, to maximize the ways in which the backend can be used, and to present as few surprises as possible to programmers. To support this, we have also striven to change X10 minimally [...] [Firstly, the initial release of CUDA was on the 23rd of June 2007. Secondly, we did it before as can also be seen once again with the points "Multiprocessing (see Linux)" and "Parallel operating of graphic cards, and other multimedia cards from different manufacturers" listed in the Feature-List #1. We have not listed these points just for fun and without any deeper reasons.]
    For a detailed technical description of how X10 is compiled to CUDA to run on GPUs, please see [] the X10'11 Workshop paper "GPU Programming in a High Level Language Compiling X10 to CUDA".
    [...]
    However since it is not unlikely that other kinds of accelerators will be supported in future (e.g. OpenCL) we will not guarantee that every child of a place is a CUDA GPU.

    X10RT Implementations
    Sockets [] An open-source implementation that uses TCP/IP sockets and ssh to support multiple places on one or more hosts. This is the default implementation, and is the only option when using Managed X10.
    [...]
    MPI [] An implementation of X10RT on top of MPI2. It is fully open source and can be found in x10.runtime/x10rt/mpi. This supports all the hardware that your MPI implementation supports, such as Infiniband and Ethernet.
    PAMI [] PAMI is an IBM communications API that supports high-end networks such as HFI (Host Fabric Interface), Blue Gene[/Q Supercomputer], Infiniband, and also Ethernet.
    [...]
    The default is sockets on all platforms except Blue Gene/Q (which defaults to [PAMI]). All platforms except Blue Gene support standalone and sockets. [Communication implementation from IBM exploits an enhanced API, IBM PAMI, which is designed to increase parallel application performance on clusters made up of Power Systems servers. InfiniBand adapter affinity support helps to improve the mapping of tasks to logical CPUs. ; IBM Spectrum MPI, which is based on Open MPI, delivers an improved, Remote Direct Memory Access (RDMA)-capable PAMI (Parallel Active Messaging Interface) using [...] Open Fabric Enterprise Distribution (OFED) on both x86-64 and IBM POWER hardware. See once again the Clarification of the 4th of June 2018.]

    Applications and Libraries Using X10
    ScaleGraph: a graph library providing large-scale graph analysis algorithms and efficient distributed computing framework for graph analysts and for algorithm developers (Tokyo Institute of Technology)
    Global Matrix Library: a library for distributed dense and sparse linear algebra (IBM)
    ANUChem: a collection of computational chemistry codes for quantum chemistry and molecular dynamics simulations (Australian National University / IBM)
    IMSuite: a suite of twelve classical distributed algorithms as benchmark kernels, used to evaluate different forms of parallelization and synchronization (IIT Madras)
    X10-based Agent Simulation on Distributed Infrastructure (XASDI): a platform for a massive agent-based simulation (IBM)
    Megaffic: an agent-based simulator of traffic flows using the XAXIS agent framework (IBM)
    SatX10: a framework for parallel boolean satisfiability (SAT) solving (IBM)
    [...]
    Armus-X10: a framework for distributed deadlock detection and avoidance

    Applications
    Chandra Krintz, UC Santa Barbara (IBM Team Contact: Steve Fink)
    We will integrate X10 into AppScale - an open-source emulation of the PaaS (Platform-as-a-Service) cloud APIs of Google App Engine. This will include integrating X10 (a) as a front-end development language (an alternative to Python and Java) that facilitates easy development of parallel and distributed applications as user-facing computations that work well in the cloud; (b) as a Tasks API language which will facilitate type-safe, parallel programming of web applications and services for efficient and scalable background computing; (c) as a language users employ for writing their own mappers and reducers, as well as the language for implementation of MapReduce in the AppScale Backend; and (d) as the implementation language for the parallel and concurrent activities within AppScale itself (for MapReduce, request handling, data processing, and other services within the AppScale control center). We also will extend the AppScale debugging and testing system with support for X10; and we will extend scheduling support in AppScale to handle execution of X10 tasks in parallel in support of the front end web service (in addition to scheduling other tasks, MapReduce jobs, internal components, database replicas, and front-end instances). As part of these pursuits, we will investigate application-specific compiler and runtime optimizations for performance, efficiency, and scaling, scheduling and load balancing techniques across both single and multiple machines.
    Publications
    Neptune: A Domain Specific Language for Deploying HPC Software on Cloud Platforms. Chris Bunch, Navraj Chohan, Chandra Krintz, and Khawaja Shams. ScienceCloud'11, June 2011.
    Admela Jukan, University of Braunschweig, Germany (X10 team contact: Vijay Saraswat)
    In this project, we will incorporate X10, a new parallel programming language, into our SAGE environment to enable multiple media streams to be composed for visualization on multiple displays without networking bottlenecks in a cluster-based visualization infrastructure of the HPDMnet lab.
    David Bader, Georgia Tech (X10 team contact: Vijay Saraswat)
    X10 offers exciting opportunities for analyzing massive graph-structured data. We are implementing parallel massive graph analysis tools on the STINGER data structure, a block-based data structure well suited for PGAS models. STINGER supports analysis of both static and dynamic graphs, targeting beyond-Facebook-scale analysis both in size and rate of change. X10's global control view over potentially distributed graph data helps graph algorithms remain more recognizable than in other PGAS and message-passing models, and X10's general purpose language permits more analysis kernels than does map reduce.
    Shishir Nagaraja, Indraprastha Institute of Information Technology, Delhi, India (X10 team contact: Vijay Saraswat)
    Prof. Nagaraja is engaged in research into P2P botnets made famous yet again by post-wikileaks network attacks. His group seeks to understand the fundamental limits of the P2P technologies used as botnet foundations and design effective botnet countermeasures. The IBM X10 award will continue to fund research in this direction. The understanding of efficiency, robustness and resilience to attacks of various decentralised botnet architectures along with development of novel techniques that will deal with the sea of uncertainty that comes from building a system out of unreliable and sometimes untrustworthy components will be game changing initiatives in dealing with the problem of botnet defense. A significant component of this work is large-scale statistical traffic analysis dealing with terabytes of traffic on a daily basis. The IBM X10 award will fund a specific part of his group's research agenda on developing concurrent techniques for performing such analysis on ISP scale traffic. Specifically, on spatial and temporal communication pattern analysis and understanding botnet structure. The development of such highly concurrent large scale systems will play a key role in engendering cooperative detection of botnets in the near future.
    Guangwen Yang, Li Liu, Tsinghua University, China (X10 team contact: Yan Li)
    [...]
    Third, we will try to propose and design some programming tools for bioinformatics in X10, such as efficient task scheduling between multiple cores and multiple nodes.
    Yu Wang, Hong Luo, Tsinghua University, China (X10 team contact: Yan Li)
    In this project we propose a new MapReduce framework implemented in X10, which can be a general solution to accelerate MapReduce algorithms and provide both efficient hardware architecture and an interface to the OS. This framework is constructed by multi-node heterogeneous cluster, where each node is a reconfigurable heterogeneous multi-core system, including multi-core CPU, FPGA and GPU. The "mapper" and "reducer" can be implemented by these computing systems, and they are configurable. The proposed MapReduce framework can act as a common platform for research in the parallel computing area.
    [...]
    Hui Liu, Shandong University, China (X10 team contact: Haibo Lin)
    [...]
    We plan to construct a problem solving environment and develop scientific computing applications for financial risk measurement in X10. [...]
    [Obviously, IBM has taken our OS and funded related research activities to confuse the public deliberately about the true origin of our original and unique works of art. As can also be seen easily, all descriptions of these applications are already infringing our copyright, for which IBM is also responsible as is the case for their publications on the website of X10.]

    Tool Development
    [...]
    Jens Palsberg, UCLA (X10 team contact: Olivier Tardieu)
    We want to provide optimization and verification techniques for parallel programs that are on par with today's standards for sequential programs.
    [...]
    [There were some more projects listed on the related webpage. Obviously, IBM has taken our OS here as well, as can be seen easily with the sections Formal Verification and Software Development Tool of the webpage L2SW.]

    Frameworks & Libraries
    Eli Tilevich, Virginia Tech (X10 team contact: Igor Peshansky)
    The stated design goal for X10 is to improve development productivity for parallel applications. When it comes to constructing advanced cloud applications, X10 must provide facilities to systematically express essential non-functional concerns, including persistence, transactions, distribution, and security. In a cloud-based application, non-functional concerns may constitute intrinsic functionality, but implementing them in X10 can be non-trivial. To streamline the implementation of non-functional concerns, the Java development ecosystem features standardized, customizable, and reusable abstractions called frameworks. Java frameworks are a result of a concerted, cooperative, and multi-year effort of multiple stakeholders in the Java technology, which has been tested and proven effective by billions of lines of production code. This project will explore how Java frameworks can be automatically adapted for use by X10 programmers. [...]
    Xiaoyun Chen, Longjie Li, Lanzhou University, China (X10 team contact: Yan Li)
    This project aims to research and improve data clustering algorithms, which will be developed as application program class libraries using X10. [...]

    Mapping high level languages
    Frank Pfenning, CMU (X10 team contact: Vijay Saraswat)
    L10 is a platform for experimenting with the design of distributed and parallel logic programming. The "logic" in traditional logic programming is a minimal first-order logic, but the logic underlying L10 is a richer logic in which every fact or deduction has an intrinsic notion of location. A logic with locations is significant because it gives a logical justification for the use of (locally) stratified negation. In L10, we are also interested in exploring the use of location to express parallelism in a logic programming language by flexibly mapping locations in L10 to places in X10, a parallel programming language being developed by IBM Research. [See also the Caliber/Calibre, which is about space and time.]

    Curriculum Development
    [...]
    Steven Reiss, Brown University (X10 team contact: Bard Bloom)
    Dr. Reiss will teach the course Programming Parallel and Distributed Systems using X10 as a unifying language. The course covers a range of topics, including lightweight threads on multi-core machines, large-scale distribution on clouds, and high-performance parallel computing on supercomputers. X10 will serve as a common language for the course, as it is designed to work well in all these domains. Students will get experience programming multi-core, distributed systems, and supercomputer platforms and will look at real applications in each of these domains. [But High Performance Cloud Computing or HPC in the Cloud (HPCC) was not envisioned at all and hence the view of the successor of the Internet as a supercomputer respectively our ON and the successor of the World Wide Web (WWW) as HP²CSs respectively our OW were also not there.]
    [...]
    Ce Yu, Tianjin University, China (X10 team contact: Haibo Lin)
    This project consists of two topics: 1) Research on migrating astronomical computing algorithms to X10 on a super computer, and 2) Parallel Computing course development to introduce the X10 programming language. The research on astronomical computing algorithms is based on our long term cooperation with astronomers. [...]
    [There were some more projects listed on the related webpage. Obviously, IBM has taken our OS as blueprint here as well, as can be seen easily with the section Astronomy of the webpage L2SW.]

    Performance & Concurrency
    Cormac Flanagan, UCSC (X10 team contact: Vijay Saraswat)
    X10 is a modern, type-safe programming language for highly-parallel, distributed, and petascale computing. A distinguishing feature of X10 is its generic and dependent type system, which statically identifies common programming errors involving array dimensions, etc., and which distinguishes local and remote data. Despite these benefits of the X10 language and development environment, concurrent programming remains a challenging task that is still prone to some traditional pitfalls, including races, atomicity violations, and determinism violations. The goal of this project is to strengthen X10's rich type system to statically verify these three fundamental correctness properties of race-freedom, atomicity, and determinism. Our particular focus is on pointer-rich data structures, such as lists, trees, and graphs.

    [Obviously, IBM has taken our OS here as well, as can be seen easily once again with the section Formal Verification of the webpage L2SW.]

    Publications [We quoted the most relevant works of this webpage in reversed order to show the temporal development.]
    [...]
    2008
    2. Constrained types for object-oriented languages. Nathaniel Nystrom, Vijay Saraswat, Jens Palsberg, Christian Grothoff, OOPSLA, October 2008.
    [...]
    2009
    16. Constrained Kinds. Nathaniel Nystrom, Olivier Tardieu, Igor Peshansky, Vijay Saraswat. Technical Report, 2009
    [...]
    2010
    [...]
    4. A Proof System for a PGAS Language. Shivali Agarwal and R.K. Shyamasundar. Concurrency, Compositionality and Correctness 2010.
    2011
    [...]
    17. Design and Implementation of a [Domain-Specific Language (]DSL[)] based on Ruby for Parallel Programming. Tetsu Soh. Master Thesis, Graduate School of Information Science and Technology, The University of Tokyo, January 2011.
    [...]
    12. X10 as a Parallel Language for Scientific Computation: Practice and Experience. Josh Milthorpe, V. Ganesh, Alistair P. Rendell and David Grove. IEEE International Parallel and Distributed Processing Symposium, May 2011.
    [...]
    11. GPU Programming in a High Level Language Compiling X10 to CUDA. by David Cunningham, Rajesh Bordawekar, and Vijay Saraswat. ACM SIGPLAN 2011 X10 Workshop, June 2011.
    10. Distributed deductive databases, declaratively: The L10 logic programming language. by Robert Simmons, Frank Pfenning, and Bernardo Toninho. ACM SIGPLAN 2011 X10 Workshop, June 2011.
    [...]
    4. A Performance Model for X10 Applications. David Grove, Olivier Tardieu, David Cunningham, Ben Herta, Igor Peshansky and Vijay Saraswat. ACM SIGPLAN 2011 X10 Workshop, June 2011.
    [...]
    2. Neptune: A Domain Specific Language for Deploying HPC Software on Cloud Platforms. Chris Bunch, Navraj Chohan, Chandra Krintz, and Khawaja Shams. ScienceCloud'11, June 2011.
    1. Evaluating the Performance and Scalability of MapReduce Applications on X10. Chao Zhang, Chenning Xie, Zhiwei Xiao, and Haibo Chen. APPT 2011 - 9th International Conference on Advanced Parallel Processing Technologies, September, 2011
    2012
    12. X10X: Model Checking a New Programming Language with an "Old" Model Checker. Milos Gligoric, Peter C. Mehlitz, and Darko Marinov. ICST 2012 - 5th International Conference on Software Testing, Verification, and Validation, Montreal, Canada, April 2012.
    [...]
    17. Introducing ScaleGraph: An X10 Library for Billion Scale Graph Analytics. Miyuru Dayarathna, Charuwat Houngkaew and Toyotaro Suzumura. ACM SIGPLAN 2012 X10 Workshop, June 2012.
    [...]
    15. X10-based Massive Parallel Large-Scale Traffic Flow Simulation. Toyotaro Suzumura, Mikio Takeuchi, Sei Kato, Takashi Imamichi, Hiroki Kanezashi, Tsuyoshi Ide, and Tamiya Onodera. ACM SIGPLAN 2012 X10 Workshop, June 2012.
    [...]
    8. Highly Scalable X10-based Agent Simulation Platform and its Application to Large-scale Traffic Simulation. Toyotaro Suzumura and Hiroki Kanezashi, 2012 IEEE/ACM 16th International Symposium on Distributed Simulation and Real Time Applications, Dublin, Ireland, 2012/10.
    [...]
    5. Towards a Practical Secure Concurrent Language. Stefan Muller and Stephen Chong. Proceedings of the 25th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'12). October 2012.
    4. XGDBench: A Benchmarking Platform for Graph Stores in Exascale Clouds. Miyuru Dayarathna and Toyotaro Suzumura, IEEE CloudCom 2012 conference, Taipei, Taiwan, 2012/12.
    3. Scalable performance of ScaleGraph for large scale graph analysis. Miyuru Dayarathna, Charuwat Houngkaew, Hidefumi Ogata, and Toyotaro Suzumura. High Performance Computing (HiPC), 2012 19th International Conference on , vol., no., pp.1,9, 18-22 Dec. 2012.
    2013
    21. IBM Mega Traffic Simulator. Takayuki Osogami, Takashi Imamichi, Hideyuki Mizuta, Tetsuro Morimura, Rudy Raymond, Toyotaro Suzumura, Rikiya Takahashi, and Tsuyoshi Ide. IBM Research Technical Report. RT0896. 2013.
    [...]
    19. Towards Parallel Constraint-Based Local Search with the X10 Language. Munera, Diaz, and Abreu. ADAPTIVE'13 - 20th International Conference on Applications of Declarative Programming and Knowledge Management.
    18. Constraint-based locality analysis for X10 programs. Sun, Chen, and Zhao. PEPM'13 Proceedings of the ACM SIGPLAN 2013 workshop on Partial evaluation and program manipulation.
    [...]
    16. X10-FT: Transparent Fault Tolerance for APGAS Language and Runtime. Chenning Xie, Zhijun Hao, Haibo Chen. PMAM 2013, Proceedings of the 2013 International Workshop on Programming Models and Applications for Multicores and Manycores.
    [...]
    13. Towards Highly Scalable Pregel-based Graph Processing Platform with X10. Bao Nguyen and Toyotaro Suzumura, The 2nd International Workshop on Large Scale Network Analysis (LSNA 2013) In conjunction with WWW 2013, May, 2013.
    [...]
    7. Experimenting with X10 for Parallel Constraint-Based Local Search. Danny Munera, Daniel Diaz, Salvador Abreu. Proceedings of the 13th International Colloquium on Implementation of Constraint LOgic Programming Systems (CICLOPS 2013), Istanbul, Turkey, August 25, 2013.
    6. Toward simulating entire cities with behavioral models of traffic. Takayuki Osogami, Takashi Imamichi, Hideyuki Mizuta, Toyotaro Suzumura, and Tsuyoshi Ide. IBM Journal of Research and Development, Vol. 57, No. 5, pp. 6:1-6:10. September-October 2013.
    [...]
    4. Accelerating Large-Scale Distributed Traffic Simulation with Adaptive Synchronization Method. Toyotaro Suzumura and Hiroki Kanezashi, 20th ITS World Congress 2013, October 2013, Tokyo, Japan.
    3. Graph database benchmarking on cloud environments with XGDBench. Miyuru Dayarathna and Toyotaro Suzumura. Automated Software Engineering. November 2013.
    [...]
    1. A Holistic Architecture for Super Real-Time Multiagent Simulation Platform. Toyotaro Suzumura and Hiroki Kanezashi, Winter Simulation Conference 2013, Washington D.C., US, December 2013.
    2014
    25. Resilient X10: Efficient failure-aware programming. David Cunningham, David Grove, Benjamin Herta, Arun Iyengar, Kiyokuni Kawachiya, Hiroki Murata, Vijay Saraswat, Mikio Takeuchi, Olivier Tardieu. Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP'14), Feb 2014.
    [...]
    20. Armus: dynamic deadlock verification for barriers. Tiago Cogumbreiro, Raymond Hu, Francisco Martins and Nobuko Yoshida. The 2014 X10 Workshop (X10'14). June, 2014
    19. Toward a profiling tool for visualizing implicit behavior in X10. Seisei Itahashi, Yoshiki Sato and Shigeru Chiba. The 2014 X10 Workshop (X10'14). June, 2014
    18. Writing Fault-Tolerant Applications Using Resilient X10. Kiyokuni Kawachiya. The 2014 X10 Workshop (X10'14). June, 2014. A preliminary version also appeared as Research Report RT0960
    [...]
    13. Efficient Parallel Dictionary Encoding for RDF Data. Long Cheng, Avinash Malik, Spyros Kotoulas, Tomas Ward, Georgios Theodoropoulos. 17th International Workshop on the Web and Databases (WebDB 2014). June, 2014.
    12. Towards Emulation of Large Scale Complex Network Workloads on Graph Databases with XGDBench. Miyuru Dayarathna and Toyotaro Suzumura. IEEE International Congress on Big Data (BigData 2014). June 2014.
    11. Semantics of (Resilient) X10. Silvia Crafa, David Cunningham, Vijay Saraswat, Avraham Shinnar, Olivier Tardieu. ECOOP 2014. July, 2014.
    10. A two-tier index architecture for fast processing large RDF data over distributed memory. Long Cheng, Spyros Kotoulas, Tomas E. Ward, and Georgios Theodoropoulos. In Proceedings of the 25th ACM conference on Hypertext and social media (HT '14). September, 2014.
    9. Scalable Parallel Numerical [Constraint Satisfaction Problem (]CSP[)] Solver. Daisuke Ishii, Kazuki Yoshizoe and Toyotaro Suzumura. In proceedings of the 20th International Conference on Principles and Practice of Constraint Programming (CP'14). September, 2014.
    [...]
    4. Massively Parallel Reasoning under the Well-Founded Semantics using X10. Ilias Tachmazidis, Long Cheng, Spyros Kotoulas, Grigoris Antoniou, Tomas E Ward. Proc. 26th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'14), November, 2014. [Smodels included in OntoBot since January 2007]
    3. Towards Scalable Distributed Graph Database Engine for Hybrid Clouds. Miyuru Dayarathna and Toyotaro Suzumura. The 5th International Workshop on Data Intensive Computing in the Clouds (DataCloud 2014). November, 2014.
    2015
    22. IMSuite: A benchmark suite for simulating distributed algorithms. Suyash Gupta and V. Krishna Nandivada. Journal of Parallel and Distributed Computing. January, 2015.
    [...]
    20. Dynamic deadlock verification for general barrier synchronisation. Tiago Cogumbreiro, Raymond Hu, Francisco Martins, and Nobuko Yoshida. Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2015). February, 2015.
    [...]
    17. High Throughput Indexing for Large-scale Semantic Web Data. Long Cheng, Spyros Kotoulas, Tomas E Ward, Georgios Theodoropoulos. The 30th ACM/SIGAPP Symposium On Applied Computing (SAC'15). April, 2015.
    16. A Resilient Framework for Iterative Linear Algebra Applications in X10. Sara S. Hamouda, Josh Milthorpe, Peter E. Strazdins, and Vijay Saraswat. Proceedings of the 16th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC 2015). May, 2015.
    15. Fault Tolerance Schemes for Global Load Balancing in X10. Claudia Fohry, Marco Bungart, and Jonas Posner. Scalable Computing: Practice and Experience Vol 16, No 2. June 2015.
    [...]
    13. Scalable Parallel Numerical Constraint Solver using Global Load Balancing. Daisuke Ishii, Kazuki Yoshizoe, Toyotaro Suzumura. ACM SIGPLAN 2015 X10 Workshop (X10'15). June, 2015.
    12. Towards an Efficient Fault-Tolerance Scheme for GLB. Claudia Fohry, Marco Bungart, Jonas Posner. ACM SIGPLAN 2015 X10 Workshop (X10'15). June, 2015.
    11. The APGAS Library: Resilient Parallel and Distributed Programming in Java 8. Olivier Tardieu. ACM SIGPLAN 2015 X10 Workshop (X10'15). June, 2015.
    [...]
    9. Cutting Out the Middleman: OS-Level Support for X10 Activities. Manuel Mohr, Sebastian Buchwald, Andreas Zwinkau, Christoph Erhardt, Benjamin Oechslein, Jens Schedel, Daniel Lohmann. ACM SIGPLAN 2015 X10 Workshop (X10'15). June, 2015. [Kicking out the plagiarists: Tock, tock, tock. Hello, hello! Good morning! Anybody home? Huh? Think, copycat! Think! IBM took our integration of KLOS, SPACE, Muse, Apertos, TUNES, etc. for X10 and those shining examples of human ingenuity (not really) even went back to the origin resulting in said integration of our OS done by us, which even eliminates the relevant parts of X10 most potentially making X10 obsolete eventually. Just only bold, stupid, and even criminal.]
    [...]
    7. Revisiting Loop Transformations with X10 Clocks. Tomofumi Yuki. ACM SIGPLAN 2015 X10 Workshop (X10'15). June, 2015.
    6. X10 for High Performance Scientific Computing. Josh Milthorpe. Ph.D. Thesis, Research School of Computer Science, Australian National University, June 2015.
    [...]
    4. DPX10: An Efficient X10 Framework for Dynamic Programming Applications. Chen Wang, Ce Yu, Jizhou Sun, Xiangfei Meng. 44th International Conference on Parallel Processing (ICPP). September, 2015.
    3. Fast Compression of Large Semantic Web Data using X10. Long Cheng, Avinash Malik, Spyros Kotoulas, Tomas E Ward, and Georgios Theodoropoulos. IEEE Transactions on Parallel and Distributed Systems. October 2015.
    2016
    [...]
    14. META: Middleware for Events, Transactions, and Analytics. David Grove, Ben Herta, Michael Hind, Martin Hirzel, Arun Iyengar, Louis Mandel, Vijay Saraswat, Avraham Shinnar, Jerome Siméon, Mikio Takeuchi, Olivier Tardieu, Wei Zhang. IBM Journal of Research and Development. Vol. 20 (2016) No. 2-3 pp.15:1-15:10
    [...]
    12. Introducing Acacia-RDF: An X10-Based Scalable Distributed RDF Graph Database Engine. Miyuru Dayarathna, Isuru Herath, Yasima Dewmini, Gayan Mettananda, Sameera Nandasiri, Sanath Jayasena, and Toyotaro Suzumura. 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). May 2016, pp. 1024-1032
    [...]
    9. SWE-X10: an actor-based and locally coordinated solver for the shallow water equations. Alexander Pöppl and Michael Bader. ACM SIGPLAN 2016 X10 Workshop (X10'16). June, 2016. [Indeed, this work is not about shallow reasoning but still relevant, especially due to the actor model.]
    8. ActorX10: an actor library for X10. Sascha Roloff, Alexander Pöppl, Tobias Schwarzer, Stefan Wildermann, Michael Bader, Michael Glaß, Frank Hannig, and Jürgen Teich. ACM SIGPLAN 2016 X10 Workshop (X10'16). June, 2016. [X10 provides the atomic block construct for lock-free synchronization, but restricts it on ...]
    7. Resilient X10 over MPI [U]ser [L]evel [F]ailure [M]itigation [(ULFM)]. Sara S. Hamouda, Benjamin Herta, Josh Milthorpe, David Grove, and Olivier Tardieu . ACM SIGPLAN 2016 X10 Workshop (X10'16). June, 2016. [Our OS integrates the (resilient) fault-tolerant, reliable, and distributed operating systems TUNES OS and Apertos (Muse) and the Cognac system based on Apertos with the Kernel-Less Operating System (KLOS) and the SPACE system.]
    [...]
    3. Acacia-RDF: An X10-Based Scalable Distributed RDF Graph Database Engine. Miyuru Dayarathna, Isuru Herath, Yasima Dewmini, Gayan Mettananda, Sameera Nandasiri, Sanath Jayasena, and Toyotaro Suzumura. 2016 IEEE 9th International Conference on Cloud Computing (CLOUD). June 2016, pp. 521-528
    2017
    9. Fault Tolerance for Cooperative Lifeline-Based Global Load Balancing in Java with APGAS and Hazelcast. Jonas Posner and Claudia Fohry. 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). June 2017.
    8. High-Performance Graph Data Management and Mining in Cloud Environments with X10. Miyuru Dayarathna and Toyotaro Suzumura. Cloud Computing: Principles, Systems and Applications. June 2017, pp. 173-210
    [...]
    6. Collective Relocation for Associative Distributed Collections of Objects. Daisuke Fujishima and Tomio Kamada. International Journal of Software Innovation (IJSI). Vol. 5 (2017) No. 2 pp. 55-69 [Associative Memory (AM) and Content-Addressable ...]
    5. Failure Recovery in Resilient X10. David Grove, Sara S. Hamouda, Benjamin Herta, Arun Iyengar, Kiyokuni Kawachiya, Josh Milthorpe, Vijay Saraswat, Avraham Shinnar, Mikio Takeuchi, Olivier Tardieu. IBM Research Technical Report RC25660. July 2017.
    [...]
    2. Large-scale distributed agent-based simulation for shopping mall and performance improvement with shadow agent projection. Hideyuki Mizuta. 2017 Winter Simulation Conference (WSC), Las Vegas, NV, December 2017, pp. 1157-1168.
    1. An X10-Based Distributed Streaming Graph Database Engine. Miyuru Dayarathna, Sathya Bandara, Nandula Jayamaha, Mahen Herath, Achala Madhushan, Sanath Jayasena, and Toyotaro Suzumura. Proceedings of the 24th IEEE International Conference on High Performance Computing (HiPC 2017), vol., no., pp.243-252, December, 2017.

    "The actor model in computer science is a mathematical model of concurrent computation that treats "actors" as the universal primitives of concurrent computation. In response to a message that it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other through messages (avoiding the need for any locks)."

    From an online encyclopedia we got the following short description about the programming language Scala: "Scala [...] is a general-purpose programming language providing support for functional programming and a strong static type system. [...] Unlike Java, Scala has many features of functional programming languages like Scheme, Standard ML and Haskell, including currying, type inference, immutability, lazy evaluation, and pattern matching. It also has an advanced type system supporting algebraic data types, covariance and contravariance, higher-order types (but not higher-rank types), and anonymous types. Other features of Scala not present in Java include operator overloading, optional parameters, named parameters, and raw strings. [...] It followed on from work on Funnel, a programming language combining ideas from functional programming and Petri nets. [...]

    Syntactic flexibility
    [...] By themselves, these may seem like questionable choices, but collectively they serve the purpose of allowing domain-specific languages to be defined in Scala without needing to extend the compiler. For example, Erlang's special syntax for sending a message to an actor [...] can be (and is) implemented in a Scala library without needing language extensions. [...] Scala standard library includes support for the actor model, in addition to the standard Java concurrency APIs."

    Erlang's runtime system is well suited for systems that have the characteristics of being distributed, fault-tolerant, soft real-time, and dependable (highly available and reliable) (non-stop applications), and that support hot swapping (code can be changed without stopping a system), respectively being reflective. But we have already integrated the reflective, object-oriented, active-object- and actor-based (concurrent and lock-free or non-blocking), (resilient) fault-tolerant, reliable, and distributed operating system Apertos (Muse), the SimAgent Toolkit with Petri nets (see also Maude), the asynchronous agent model, as well as the programming languages X10 and UPC.

    For sure, it would be legal to use features of Apertos (Muse) as well as Concurrent Constraint Logic Programming (CCLP) and adapt them as a programming language. But

  • Apertos (Muse) also already provides a programming model{!?},
  • the reflective property of the Cognac system based on Apertos dissolves the boundary between an operating system and a programming language, and
  • we already have integrated the distributed operating systems TUNES OS and Apertos (Muse), and the Cognac system based on Apertos in our OS.

    The company Microsoft also confirms that our asynchronous Actor- and Agent-Oriented Programming (AAOP) approach was new in 2010 (see its case below).

    But this integration, through our integrating Ontologic System Architecture (OSA), of the reflective, object-oriented, active-object- and actor-based (concurrent and lock-free or non-blocking), (resilient) fault-tolerant, reliable, and distributed operating system Apertos (Muse) with X10, specifically the reflective property, is very exotic and even too exotic, original and unique, like a fingerprint respectively a signature of our OS and therefore also an original and unique part of our OS, which is even more clear when adding the field of resilience or robustness, including fault tolerance, and the field of cloud computing. Convicted once again, as usual.
    Proper licensing accredited by our Society for Ontological Performance and Reproduction (SOPR) is required.

    Obviously, the webpages listing works and activities of other entities on the basis of X10 are infringing our copyright. Thereby, it does not matter at all that the listed works and activities were not created by IBM, or were only financed in whole or in part by IBM, because the contents of the webpages were made by taking our OS as blueprint, the listings of said webpages already describe an essential part of our OS, and the webpages were made to mislead the public about the true origin of our OS, damage our reputation, and infringe our other rights.

    As in the case of

  • High Performance and High Productivity Computing Systems (HP²CSs) including
    • supercomputing systems or supercomputers,

    and

  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs), including
    • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),

    we have shown in this case of asynchronous systems once again the originality and uniqueness of our OS, doubtlessly.

  • Microsoft: We have new evidence in the field of asynchronous programming, specifically in relation with Visual Studio 2010 and following versions.
    We quote the webpage titled Async Agents - Actor-Based Programming with the Asynchronous Agents Library and published on the website of its Microsoft Developer Network (MSDN) Magazine in September 2010:
    "With multi-core processors now commonplace in the market, from servers to desktops to laptops, the parallelization of code has never been more important. To address this vital area, Visual Studio 2010 introduces several new ways to help C++ devel­opers take advantage of these capabilities with a new parallel runtime and new parallel programming models.
    [...]
    The most common parallel programming models today involve general-purpose, concurrency-aware containers, and algorithms such as parallelizing loop iterations.
    [...]
    Actor-based programming models deal quite well with problems such as latency [...]
    [...]
    More recently, with the abundance of multi-core processors, the actor model has resurfaced as an effective method to hide latencies for efficient parallel execution. Visual Studio 2010 introduces the Asynchronous Agents Library (AAL), an exciting new actor-based model with message-passing interfaces where the agents are the actors. [Bingo!!! This Asynchronous Agents Library (AAL) and its integration with Parallel Patterns Library (PPL) is an original and unique, essential element of our OS, as can be seen easily with integrating OSA, OntoBot, KLOS and SPACE, Muse, its successor Apertos, and Cognac based on Apertos.]
    [...]
    The foundation for concurrency support in Visual Studio 2010 and AAL is the new Concurrency Runtime, which is shipped as part of the C Runtime (CRT) in Visual Studio 2010. The Concurrency Runtime offers a cooperative task scheduler and a resource manager that has a deep understanding of the underlying resources of the machine. This allows the runtime to execute tasks in a load-balanced fashion across a multi-core machine.
    [...]
    Applications and libraries themselves mainly interact with the Concurrency Runtime through the two programming models that sit on top of the scheduler, the AAL and the Parallel Patterns Library (PPL), although they can also directly interact with the runtime itself. [Bingo!!! This integration of the Parallel Patterns Library (PPL) and the Asynchronous Agents Library (AAL) is an original and unique, essential element of our OS, as can be seen easily with our integrating OSA, OntoBot, KLOS and SPACE, which include a parallel operating system, as well as Muse, its successor Apertos, and Cognac based on Apertos, which include actor-based programming.]
    [...]
    While not the focus of this article, the PPL is a powerful tool for developers that can be used in conjunction with all the new methods introduced in the AAL.
    [...]
    In contrast, the AAL provides the ability to parallelize applications at a higher level and from a different perspective than traditional techniques.
    [...]
    The AAL provides two main components: a message-passing framework and asynchronous agents.
    The message-passing framework includes a set of message blocks, which can receive, process and propagate messages. By chaining together message blocks, pipelines of work can be created that can execute simultaneously. [see OntoBlender]
    Asynchronous agents are the actors that interact with the world by receiving messages, performing local work on their own maintained state, and sending messages.
    Together, these two components allow developers to exploit parallelism in terms of the flow of data rather than the flow of control, and to better tolerate latencies by utilizing parallel resources more efficiently.
    [...]
    The second major issue with a traditional parallelization approach is ordering. Obviously, in the case of an e-mail message, parallel processing of the text must maintain the order of the text or the meaning of the message is totally lost. To maintain the ordering of the text, a parallel_for_each technique would incur significant overhead in terms of synchronization and buffering, which is automatically handled by the AAL.
    [...]
    One of the main benefits of the message block primitives supplied by the AAL is that they're composable. Therefore, you can combine them, based on the desired behavior. [see once again OntoBlender]
    [...]
    The C++0x lambda parameter on the censor block constructor defines the transformation function, which looks up the message's stored input string in a dictionary to see if it should be changed to a different string.
    [...]
    These input a message into a block synchronously and asynchronously, respectively.
    [...]
    For performance efficiency, the AAL is intelligent in its creation of LWTs so that only one is scheduled at a time for each message block.
    [...]
    The fact that each message block has its own LWT that handles processing and propagation is central to the design, which allows the message-passing framework to pipeline work in a dataflow manner. Because each message block does its processing and propagation of its messages in its own LWT, the AAL is able to decouple the blocks from one another and allow parallel work to be executed across multiple blocks.
    [...]

    Asynchronous Agents
    The second main component of the AAL is the asynchronous agent. Asynchronous agents are coarse-grained application components that are meant to asynchronously deal with larger computing tasks and I/O. Agents are expected to communicate with other agents and initiate lower-level parallelism. They're isolated because their view of the world is entirely contained within their class, and they can communicate with other application components by using message passing. Agents themselves are scheduled as tasks within the Concurrency Runtime. This allows them to block and yield cooperatively with other work executing at the same time.
    An asynchronous agent has a set lifecycle, as shown in Figure 8. The lifecycle can be monitored and waited on.
    [...]
    Three base class functions - start, cancel and done - transition the agent between its different states. Once constructed, agents are in the created state. Starting an agent is similar to starting a thread. [See also Cognac system based on Apertos.]
    [...]
    This is where agents can come into play in order to help tolerate the differences in latencies with I/O.
    [...]
    While the majority of the processing done in this application is using dataflow, the WriterAgent shows how some control-flow can be introduced into the program.
    [...]
    One of the benefits of agent processing is the ability to use asynchronous actors in the application. Thus, when data arrives for processing, the input agent will asynchronously start sending the strings through the pipeline and the output agent can likewise read and output files. These actors can start and stop processing entirely independently and totally driven by data. Such behavior works beautifully in many scenarios, especially latency-driven and asynchronous I/O, like the e-mail processing example. [What should we say about our iconic OS?]
    [...]
    This article was written to give you a glimpse into some of the new possibilities for actor-based programming and dataflow pipelining built into Visual Studio 2010. [See once again the comments given to the related quotes above.]
    [...] there are plenty of other features we weren't able to cover in this article: custom message block creation, filtering messages, and much more.
    [The authors are] software development engineer[s] in the Parallel Computing Platform group at Microsoft. [They work] on the Concurrency Runtime team."

    See also the Investigations::Multimedia of the 15th and 18th of May 2018.
    Also note, no C# here.
    Microsoft says in the year 2010, around 4 years after us, that it is new. We have proven that the AAL as well as its integration with the PPL and also with Blender viewed as an IDE are original and unique, essential elements of our OS. Therefore, Microsoft has copied a part of our OS and infringed our copyright once again, as usual.
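    For illustration only, and not as a reproduction of the listing of the quoted article, the following minimal sketch shows a message-block pipeline chained to an asynchronous agent of the kind described above, assuming the Concurrency Runtime of Visual Studio 2010 or later and its header <agents.h>; the class name WriterAgent echoes the article, while the dictionary contents and the messages are merely illustrative assumptions.

    // Minimal sketch of an AAL dataflow pipeline feeding an asynchronous agent.
    // Assumes the Concurrency Runtime of Visual Studio 2010 or later; the
    // dictionary contents and the messages are illustrative only.
    #include <agents.h>
    #include <iostream>
    #include <map>
    #include <string>

    using namespace concurrency;

    // An asynchronous agent that receives censored strings from its source
    // and prints them; it follows the created -> started -> done lifecycle.
    class WriterAgent : public agent
    {
    public:
        explicit WriterAgent(ISource<std::wstring>& source) : m_source(source) {}
    protected:
        void run()
        {
            for (int i = 0; i < 3; ++i)                // expect three messages
                std::wcout << receive(m_source) << std::endl;
            done();                                    // transition to the done state
        }
    private:
        ISource<std::wstring>& m_source;
    };

    int main()
    {
        std::map<std::wstring, std::wstring> dictionary;
        dictionary[L"darn"] = L"d**n";                 // illustrative replacement rule

        unbounded_buffer<std::wstring> output;

        // A transformer message block: looks up each input string in the
        // dictionary and propagates the (possibly replaced) string.
        transformer<std::wstring, std::wstring> censor(
            [&dictionary](const std::wstring& s) -> std::wstring {
                std::map<std::wstring, std::wstring>::const_iterator it = dictionary.find(s);
                return it != dictionary.end() ? it->second : s;
            });

        censor.link_target(&output);                   // chain the blocks into a pipeline

        WriterAgent writer(output);
        writer.start();                                // schedule the agent as a task

        asend(censor, std::wstring(L"hello"));         // feed the head of the pipeline
        asend(censor, std::wstring(L"darn"));          // asynchronously
        asend(censor, std::wstring(L"world"));

        agent::wait(&writer);                          // block until the agent is done
        return 0;
    }

    In this sketch the transformer block does its lookup in its own lightweight task, the unbounded_buffer decouples it from the agent, and agent::wait blocks until the agent has reached the done state of its lifecycle.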

    See once again the reflective, object-oriented, active-object- and actor-based (concurrent and lock-free or non-blocking), (resilient) fault-tolerant, reliable, and distributed operating system Apertos (Muse).
    But this integration with our integrating Ontologic System Architecture with its synchronous and asynchronous modules, specifically the parallel programming paradigm with the actor-based and agent-oriented programming paradigms and also the Blender adapted as an IDE, is very exotic and even too exotic, original and unique, like a fingerprint respectively a signature of our OS, and therefore also an original and unique part of our OS. Convicted once again, as usual.

    As in the case of

  • High Performance and High Productivity Computing Systems (HP²CSs), including
    • supercomputing systems or supercomputers,

    and

  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs), including
    • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),

    we have shown once again in this case of asynchronous systems the originality and uniqueness of our OS, doubtlessly.

    As we said multiple times in the past, the legal situation in the case of our works of art is different from the one in the case of scientific or technical publications, and hence there is no argument of an ordinary technological progress and eventually no cherry picking for anybody.

    Oh, the hole is also much deeper than we thought at first. The foundations of the User Level Failure Mitigation (ULFM) proposal and the ULFM-MPI standard are also based on our OS.
    What have you all done all these years? Obviously, copying our works of art has been one of the main tasks or even the main task for around 20 years.
    Indeed, we are approaching limits on both sides that would require a complete stop to work out in the next decade what has happened, where and how far the rights of C.S. and our corporation have been infringed in the last two decades, and to decide what a fair and reasonable compensation could truly be.


    09.June.2018
    Style of Speed Further steps
    In the last days, we beautified the design and improved the performance of our two new powered lift aircraft models mentioned in the Further steps of the 6th of June 2018.

    Furthermore, for supplementing our Hoverpod and Hoverbus ranges, we also developed a new powered lift aircraft model that is based on already approved and tested technologies with improved performance, comes in the following three basic variants:

  • luxury with interior in the lounge design and 4 seats,
  • comfort with 6 seats, and
  • economy with 8 or 9 seats,

    and can be utilized as

  • private vehicle, also known as flying car, or
  • public transport vehicle, also known as flying taxi.

    Potentially, this new powered lift aircraft model might replace the Multivector models mentioned in the Further steps of the 2nd of September 2017 and 3rd of February 2018.

    Moreover, we also continued the development of one of our propulsion systems, which has the potential to decrease the sound of these powered lift aircraft models even more to around 40 dB(A) at 1 m, making them very quiet respectively whispering aircraft. :psss...ssst!

    In a subsequent step, we already began with the optimization of their production processes in our gigafactories to make our newest powered lift aircraft models more affordable.

    Most potentially, these three new powered lift aircraft models will be available for all members of our Society for Ontological Performance and Reproduction (SOPR), who are eligible for the benefit program of C.S..


    15.June.2018

    03:22, 07:42, 18:29, and 19:47 UTC+2
    Preliminary result

    *** Work in progress - some more ordering and reordering ***
    In relation with the Investigations::Multimedia of the 7th of June 2018 we looked at the documents titled

  • Semantic Type Qualifiers and
  • Constrained Types for Object-Oriented Languages.

    In relation with the first document we found out so far that it describes a relatively elaborate and flexible system for user-defined type refinements with an extensible type-checker and a related soundness checker based on an automatic theorem prover, which seems to differ from the constrained types approach of the imperative object-oriented programming language X10.

    In relation with the second document we found out that a few statements made in the document titled "Constrained Types for Object-Oriented Languages" already prove our allegations and clarify the case:
    "[...] the compiler supports a simple equality-based constraint system but, in addition, supports extension with new constraint systems using compiler plugins.",
    "Constrained types are checked (mostly) at compile-time. The compiler uses a constraint solver to perform universal reasoning (e.g., "for all possible values of method parameters") for dependent type-checking. [Also compare with Semantic Type Qualifiers: "A soundness checker automatically proves that each refinement's type rules ensure the intended invariant, for all possible programs."]",
    "The design supports separate compilation: a class needs to be recompiled only when it is modified or when the method and field signatures or invariants of classes on which it depends are modified.",
    "Dependent clauses also form the basis of a general user-definable annotation framework we have implemented separately [34 [An annotation and compiler plugin system for X10]]. We claim the design is clean and modular."
    In addition, the document
    refers back to dependent types mentioned in the document titled "X10: An Object-Oriented Approach to Non-Uniform Cluster Computing",
    says "[...] X10's support for constrained types, a form of dependent type [27, 54, 36, 6, 7, 3, 13] - types parametrized by values - defined on predicates over the immutable state of objects." and "Our work is most closely related to DML, [54 [Dependent types in practical programming]], an extension of ML with dependent types. DML is also built parametrically on a constraint solver. Types are refinement types; they do not affect the operational semantics and erasing the constraints yields a legal ML program.", and also
    references the document titled "Calculus of Constructions", which the document titled "Semantic Type Qualifiers" also references when discussing dependent types and DML with the introducing sentence "Some type systems, including the calculus of constructions [10], Nuprl [9], and type systems [37, 11] for Proof-Carrying Code (PCC) [31] and Typed Assembly Language [30], use a form of dependent types [28] to allow predicates to be directly encoded as types." and "Dependent ML (DML) [40, 41] allows ML types to depend upon integers with linear inequality constraints. This limited form of dependent types can be used to automatically prove arithmetic program invariants, including those provable by our integer qualifiers like pos and nonzero. DML's types can also express arithmetic invariants that relate multiple program expressions, which are not supported in our framework."

    Indeed, the attempt to build up such a logical bridge or chain of arguments for adding constraint programming to avoid a causal link with our OS is clever, but not clever enough because

  • a basic property of our OS is (mostly) being specification- and proof-carrying besides (mostly) being validated and verified,
  • C and C++ are two of the basic programming languages in our OS,
  • the OntoBot software component of our OS is based on
    • the SimAgent Toolkit, which again is based on the reflective, incrementally compiled software development environment Poplog, for the programming languages
      • POP-11, which is the core language of Poplog,
      • Common Lisp, which supports the
        • combination of the procedural, functional, and object-oriented programming paradigms and also
        • Constraint Programming (CP) paradigm,
      • Prolog, which includes the
        • Constraint Programming (CP) paradigm used for implementing constraint solvers and
        • Constraint Logic Programming (CLP or ConsLP) paradigm,

          and

      • Standard ML, which is a general-purpose, modular, functional programming language with compile-time type checking and type inference, and has
        • a formal specification, given as typing rules and operational semantics,
        • as dialects
          • Alice ML (support for lazy evaluation, concurrency (multithreading and distributed computing via remote procedure calls), and constraint programming),
          • Concurrent ML (concurrency),
          • Dependent ML (restricted notion or limited form of dependent types, employs a constraint theorem prover; 2007; see also its successor called Applied Type System (ATS) and mentioned in the OntoLix and OntoLinux Further steps of the 5th of July 2017), and
          • our Proof-Carrying ML (PCML) (goes beyond Dependent ML (DML)),
      • "[t]he fact that the compiler and compiler subroutines are available at run-time (a requirement for incremental compilation) gives it the ability to support a far wider range of extensions than would be possible using only a macro facility[, which] made it possible for incremental compilers to be added [(in the sense of a plugin)] for Prolog, Common Lisp and Standard ML", but also for C and C++, and DML, as well as UPC and even X10,
    • First-Order Logic (FOL),
    • Higher-Order Logic (HOL), specifically Prolog,
    • Maude, which provides the
      • term rewriting paradigm,
      • HOL programming paradigm, and
      • many other paradigms,

      and also

    • stable model semantics, specifically the answer set solver smodels, whereby
      • answer set solvers are programs for generating stable models and
      • constraints play an important role in answer set programming based on the stable model semantics (see the webpage Components of the website of OntoLinux and also for example the document titled "Logic programs with stable model semantics as a constraint programming paradigm"),
  • TUNES, which provides the
    • Concurrent Constraint Programming (CCP or ConcConsP) paradigm,
    • Concurrent Logic Programming (CLP or ConcLP) paradigm,
    • Constraint Logic Programming (CLP or ConsLP) paradigm, and
    • actor model, which is also described as a special case of the Concurrent Constraint (Logic) Programming (CC(L)P) paradigm,

    (see also the OntoLix and OntoLinux Further steps of the 5th of March 2017) and

  • ToonTalk, which adds the
    • Concurrent Logic Programming (CLP or ConcLP) paradigm, specifically Concurrent Prolog, and
    • Visual CLP (VCLP or VConcLP) paradigm,

    which results in the Visual Concurrent Constraint Logic Programming (VCCLP) paradigm, finally.
    The points about proof-carrying and verification, incremental compilation, and compiler plugin system or extensibility, as well as the constraint solver itself break down that chain of argumentation of the authors and the company again.

    In the related case of the Investigations::Multimedia, AI and KM of the 16th of December 2017, which is also about our integration of our OntoBot software component and hence a constraint solver with our core of our OS, we listed the following:

  • reflective Evolutionary operating system (Evoos) described in The Proposal,
  • OS basic properties of (mostly) being validated and verified, and kernel-less reflective/fractal/holonic (see the webpage Overview),
  • verified and capability-based OntoL4 microkernel respectively software component (see the webpage Components),
  • OntoCore software component,
  • OntoBot software component, and
  • Total Quality Management (TQM) system and Agent-Based Operating System (ABOS), which include the process of planning (see the sections Exotic Operating System and Formal Modeling of the webpage Links to Software),

    which are all integrated by our Ontologic System Architecture (OSA) (see the section Integrating Architecture of the webpage Overview once again).

    In very short: Bingo!!! In short: From our point of view, the term mostly set in parentheses and the constraint system alone are already sufficient evidence for showing a causal link with our OS, but eventually the features of incremental compilation and the compiler plugin are similar to the Poplog and SimAgent Toolkit, which provide such an extensibility in a slightly different way, and the integration of specification- and proof-carrying object-oriented code with ConsLP of Prolog and ConcLP of Concurrent Prolog results in said constraint types. All relevant elements of semantic type qualifiers and DML, and hence constraint types, are already included in our OntoBot and hence in our OS since its start in 2006, and much more, because we start with our Zero Ontology O# and can also define the programming languages with their type systems. But in this limited case IBM has copied this specific constraint types approach from our OS for its programming language X10, obviously and doubtlessly. We can show it in detail but leave the complete work out to the children.

    Nevertheless, we have not mentioned X10 without any reason, but we think that the approach of the semantic type qualifiers and our approach with complete logic programming, model checking, total flexibility, and much more are superior on the one hand, and the overloading of the programming model of X10 with constraint types might be counterproductive in respect to the goal of high productivity on the other hand.
    Eventually, integrating such programming languages like e.g. X10 in our OS, specifically in our Ontologic(-Oriented) (OO 3) paradigm and Ontologic Programming (OP) paradigm, seems to be much more advantageous than fiddling around with a highly specialized standalone programming language for highly specialized computing systems.

    This also shows nicely how elegant the integrated OSA is. The constraint solvers included in our OntoBot are utilized for system safety in multiple ways coming from both directions related to types and capabilities:

  • constraint types,
  • capability allocation (see once again the Investigations::Multimedia, AI and KM of the 16th of December 2017), and
  • component isolation and security, which has been definitively achieved by applying software verification techniques based on type safety (see also Extremely Reliable Operating System (EROS) and Coyotos).

    And it shows also how this goes directly and seamlessly further with the integration of

  • Multilingual Multimodal Multiparadigmatic Multidimensional Multimedia User Interfaces (M⁵UIs) (see our OntoScope software component),
  • Integrated Development Environments (IDEs) (see our OntoBlender software component),
  • Computer-Aided technologies (CAx) (see our OntoCAx software component),
  • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs) (see the Clarification of the 11th of May 2018), and
  • High Performance and High Productivity Computing Systems (HP²CSs) (see the Clarification of the 4th of June 2018),

    as we have explained several times in the recent past in relation with other features of our OS, such as the

  • molecular or liquid system composition approach as part of our OO 3 paradigm, as well as
  • integration of SoftBionics (SB), Mediated Reality (MedR), and Synthetic Reality (SR).

    Our OS always fits.


    20.June.2018

    19:47 UTC+2
    Ah, what ...?

    We came across that incompetent (e.g. anti-social) self-exposer James Bridle, who calls himself a writer and an artist, and acts as the great enlightener and even the saviour of modern society and humanity, when we read some comments published by a fake news provider, which is heavily promoting his latest book and misleading the public in relation with our existence and outstanding achievements.

    From the article titled "Rise of the machines: has technology evolved beyond our control?" we got the following statements:
    "Something strange has happened to our way of thinking - and as a result, even stranger things are happening to the world. We have come to believe that everything is computable and can be resolved by the application of new technologies. [Sometimes we explain our Ontologic System as a belief system.]",
    "But these technologies are not neutral facilitators: they embody our politics and biases, they extend beyond the boundaries of nations and legal jurisdictions and increasingly exceed the understanding of even their creators. As a result, we understand less and less about the world as these powerful technologies assume more control over our everyday lives. [Firstly, read on the webpage Introduction of the website of our Ontologic System OntoLinux what we said about neutrality. Secondly, he confuses creators with implementers.]",
    "Across the sciences and society, in politics and education, in warfare and commerce, new technologies are not merely augmenting our abilities, they are actively shaping and directing them, for better and for worse. [Guess why he used the term augment.]",
    "Instead of a utopian future in which technological advancement casts a dazzling, emancipatory light on the world, we seem to be entering a new dark age characterised by ever more bizarre and unforeseen events. The Enlightenment ideal of distributing more information ever more widely has not led us to greater understanding and growing peace, but instead seems to be fostering social divisions, distrust, conspiracy theories and post-factual politics. [That is not the problem of technology, but of moral, ethics, social competence, and so, as we can also see with that shining example of human ingenuity and wisdom (not really).]",
    "In the 1950s, a new symbol began to creep into the diagrams drawn by electrical engineers to describe the systems they built: a fuzzy circle, or a puffball, or a thought bubble. Eventually, its form settled into the shape of a cloud. Whatever the engineer was working on, it could connect to this cloud, and that's all you needed to know. The other cloud could be a power system, or a data exchange, or another network of computers. Whatever. It didn't matter. The cloud was a way of reducing complexity, it allowed you to focus on the issues at hand. Over time, as networks grew larger and more interconnected, the cloud became more important. It became a business buzzword and a selling point. It became more than engineering shorthand; it became a metaphor. [We do not know where he took that history from, but we do know exactly where he took the cloud, fuzzy circle, puffball, and thought bubble from. Simply take a look at the image titled "Evidence" also shown on the webpage Caliber/Calibre (Puff Pang Kaboom is written below the right foot) and the section Network Technology of the wepage Links to Software of the website of OntoLinux to find a cloud and the fields of grid computing and cloud computing. And complexity is related to Ontonics.]",
    "Today the cloud is the central metaphor of the internet: a global system of great power and energy that nevertheless retains the aura of something numinous, almost impossible to grasp. We work in it; we store and retrieve stuff from it; it is something we experience all the time without really understanding what it is. But there's a problem with this metaphor: the cloud is not some magical faraway place, made of water vapour and radio waves, where everything just works. [The cloud is not the central metaphor but our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) respectively our OS.]",
    "Computation is increasingly layered across, and hidden within, every object in our lives, and with its expansion comes an increase in opacity and unpredictability. One of the touted benefits of Samsung's line of "smart fridges" in 2015 was their integration with Google's calendar services, allowing owners to schedule grocery deliveries from the kitchen. It also meant that hackers who gained access to the then inadequately secured machines could read their owner's Gmail passwords. Researchers in Germany discovered a way to insert malicious code into Philips's wifi-enabled Hue lightbulbs, which could spread from fixture to fixture throughout a building or even a city, turning the lights rapidly on and off and - in one possible scenario - triggering photosensitive epilepsy. This is the approach favoured by Byron the Bulb in Thomas Pynchon's Gravity's Rainbow, an act of grand revolt by the little machines against the tyranny of their makers. Once-fictional possibilities for technological violence are being realised by the Internet of Things. [The Internet of Things (IoT) is also include in our OS, as can be seen easily in the Feature-List AutoSemantic #1 and Feature-List #5, and the referenced Virtual Object System (VOS).]",
    "In Kim Stanley Robinson's novel Aurora, an intelligent spacecraft carries a human crew from Earth to a distant star. The journey will take multiple lifetimes, so one of the ship's jobs is to ensure that the humans look after themselves. When their fragile society breaks down, threatening the mission, the ship deploys safety systems as a means of control: it is able to see everywhere through sensors, open or seal doors at will, speak so loudly through its communications equipment that it causes physical pain, and use fire suppression systems to draw down the level of oxygen in a particular space. [See the website of Style of Speed to find intelligent spacecrafts operated by our OS.]",
    "This is roughly the same suite of operations available now from Google Home and its partners: a network of internet-connected cameras for home security, smart locks on doors, a thermostat capable of raising and lowering the temperature in individual rooms, and a fire and intruder detection system that emits a piercing emergency alarm. Any successful hacker would have the same powers as the Aurora does over its crew, or Byron over his hated masters.",
    "Before dismissing such scenarios as the fever dreams of science fiction writers, consider again the rogue algorithms in the stock exchanges. These are not isolated events, but everyday occurrences within complex systems. [And once again a reference to Ontonics.]",
    "In Hollywood, studios run their scripts through the neural networks of a company called Epagogix, a system trained on the unstated preferences of millions of moviegoers developed over decades in order to predict which lines will push the right - meaning the most lucrative - emotional buttons. Algorithmic engines enhanced with data from Netflix, Hulu, YouTube and others, with access to the minute-by-minute preferences of millions of video watchers acquire a level of cognitive insight undreamed of by previous regimes. Feeding directly on the frazzled, binge-watching desires of news-saturated consumers, the network turns on itself, reflecting, reinforcing and heightening the paranoia inherent in the system. [See the section Integrating Architecture of the webpage Overview, specifically the Emotion Machine architecture, the section Softbionics (SB) of the webpage Terms of the 21st Century, and the field of Algorithmic Information Theory (AIT) discussed in The Proposal.]",
    "We are able to record every aspect of our daily lives by attaching technology to the surface of our bodies, persuading us that we too can be optimised and upgraded like our devices. Smart bracelets and smartphone apps with integrated step counters and galvanic skin response monitors track not only our location, but every breath and heartbeat, even the patterns of our brainwaves. [See the OntoLogger software component.]",
    "Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity. [See the Basic Properties of the webpage Overview and the section Mixed Reality of the webpage Links to Software, and keep in mind that complexitiy is related to Ontonics and our OS. See also the last sections below.]",
    "By the time the Google Brain-powered AlphaGo software took on the Korean professional Go player Lee Sedol in 2016, something had changed. In the second of five games, AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. "That's a very strange move," said one commentator. "I thought it was a mistake," said another. Fan Hui, a seasoned Go player who had been the first professional to lose to the machine six months earlier, said: "It's not a human move. I've never seen a human play this move." [As we said some years ago, our OS is creative. See for example the OntoLix and OntoLinux Website update of the 1st of April 2015 and the OntoLix and OntoLinux Website update of the 8th of March 2017. See also the section Softbionics (SB) of the webpage Terms of the 21st Century once again and the section Algorithmic/Generative/Evolutionary/Organic ... Art/Science of the webpage Links to Software of the website of OntoLinux.]",
    "AlphaGo went on to win the game, and the series. AlphaGo's engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them. [Obviously, we have here the reflective property of our OS and our related mirroring technique. In addition, that fraudster withhold our related explanations intentionally, specifically in relation with the fields of Pure Rationality and Total Quality Management (TQM), and our Bridge from Natural Intelligence (NI) to Artificial Intelligence (AI).]",
    "The late Iain M Banks called the place where these moves occurred "Infinite Fun Space". In Banks's SF novels, his Culture civilisation is administered by benevolent, superintelligent AIs called simply Minds. While the Minds were originally created by humans, they have long since redesigned and rebuilt themselves and become all-powerful. Between controlling ships and planets, directing wars and caring for billions of humans, the Minds also take up their own pleasures. Capable of simulating entire universes within their imaginations, some Minds retreat for ever into Infinite Fun Space, a realm of meta-mathematical possibility, accessible only to superhuman artificial intelligences. [Do not confuse with our OntoSpaceCaliber/Calibre once again, specifically the section Singularity Ontoverse, the sections Earth Simulation/Virtual Globe and Astronomy of the webpage Links to Software, and the Clarification #1 of the 14th of July 2009. By the way, we did not know I.M. Banks works before we created our OS with its Ontoverse or Ontologic uniVerse (OV), which differs in various and substantial ways from them.]",
    "Many of us are familiar with Google Translate, which was launched in 2006, using a technique called statistical language inference. Rather than trying to understand how languages actually worked, the system imbibed vast corpora of existing translations: parallel texts with the same content in different languages. By simply mapping words on to one another, it removed human understanding from the equation and replaced it with data-driven correlation. Translate was known for its humorous errors, but in 2016, the system started using a neural network developed by Google Brain, and its abilities improved exponentially. Rather than simply cross-referencing heaps of texts, the network builds its own model of the world, and the result is not a set of two-dimensional connections between words, but a map of the entire territory. In this new architecture, words are encoded by their distance from one another in a mesh of meaning - a mesh only a computer could comprehend. [Obviously, we have here another original and unique, essential part of our OS related to the Semantic (World Wide) Web, Natural Language Processing, Natural Image Processing, and their integration by our Ontologic System Architecture (OSA), as we always claimed and explained, and eventually proved again and again.]",
    "While a human can draw a line between the words "tank" and "water" easily enough, it quickly becomes impossible to draw on a single map the lines between "tank" and "revolution", between "water" and "liquidity", and all of the emotions and inferences that cascade from those connections. The map is thus multidimensional, extending in more directions than the human mind can hold. As one Google engineer commented, when pursued by a journalist for an image of such a system: "I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space." This is the unseeable space in which machine learning makes its meaning. Beyond that which we are incapable of visualising is that which we are incapable of even understanding. [Obviously, he does know that Google has copied our OS. Besides this, we also have here a reference to the OntoScope software component. It also becomes obvious, that quoted Google engineer is not the creator but only an implementer.]",
    "In the same year, other researchers at Google Brain set up three networks called Alice, Bob and Eve. Their task was to learn how to encrypt information. Alice and Bob both knew a number - a key, in cryptographic terms - that was unknown to Eve. Alice would perform some operation on a string of text, and then send it to Bob and Eve. If Bob could decode the message, Alice's score increased; but if Eve could, Alice's score decreased. Over thousands of iterations, Alice and Bob learned to communicate without Eve breaking their code: they developed a private form of encryption like that used in private emails today. But crucially, we don't understand how this encryption works. Its operation is occluded by the deep layers of the network. What is hidden from Eve is also hidden from us. The machines are learning to keep their secrets. [As we said above, Google is not the creator but only implementer.]",
    "How we understand and think of our place in the world, and our relation to one another and to machines, will ultimately decide where our technologies will take us. We cannot unthink the network; we can only think through and within it. [See the webpage Caliber/Calibre once again and the Picture of the Day of the 4th of October 2008, or/and watch the movie Sphere published in the year 1998. But there is one more thing: You cannot unthink C.S. and our corporation, too.]",
    "The technologies that inform and shape our present perceptions of reality are not going to go away, and in many cases we should not wish them to. Our current life support systems on a planet of 7.5 billion people and rising depend on them. Our understanding of those systems, and of the conscious choices we make in their design, remain entirely within our capabilities. We are not powerless, not without agency. We only have to think, and think again, and keep thinking. The network - us and our machines and the things we think and discover together - demands it. [Tock, tock, tock. Hello, hello! Good morning! Anybody home? Huh? Think, fraudster! Think! Instead of stealing other Intellectual Properties (IPs), misleading the public, and writting utter nonsense.]",
    "Computational systems, as tools, emphasise one of the most powerful aspects of humanity: our ability to act effectively in the world and shape it to our desires. But uncovering and articulating those desires, and ensuring that they do not degrade, overrule, efface, or erase the desires of others, remains our prerogative. [See the Picture of the Day of the 4th of October 2008 once again and keep in mind that we have created our OS.]",
    "When Kasparov was defeated back in 1997, he didn't give up the game. A year later, he returned to competitive play with a new format: advanced, or centaur, chess. In advanced chess, humans partner, rather than compete, with machines. And it rapidly became clear that something very interesting resulted from this approach. While even a mid-level chess computer can today wipe the floor with most grandmasters, an average player paired with an average computer is capable of beating the most sophisticated supercomputer - and the play that results from this combination of ways of thinking has revolutionised the game. It remains to be seen whether cooperation is possible - or will be permitted - with the kinds of complex machines and systems of governance now being developed, but understanding and thinking together offer a more hopeful path forward than obfuscation and dominance. [See the section History of the webpage Overview for the point cybernetic reflection, augmentation, and extension, and the section Basic Properties of the same webpage for the point of (mostly) being collaborative.]", and
    "Our technologies are extensions of ourselves, codified in machines and infrastructures, in frameworks of knowledge and action. Computers are not here to give us all the answers, but to allow us to put new questions, in new ways, to the universe. [See the section History of the webpage Overview once again.]".

    In a second article titled "How Peppa Pig became a video nightmare for children" he presented himself once again with a similar statement:
    "The weirdness of YouTube videos, the extremism of Facebook and Twitter mobs, the latent biases of algorithmic systems: all of these have one thing in common with the internet itself, which is that - with a few dirty exceptions - nobody intentionally designed them this way. This is perhaps the strangest and most salutary lesson we can learn from these examples, if we choose to learn at all. The weirdness and violence they produce seems to be in direct correlation to how little we understand their workings - and how much is hidden from us, deliberately or otherwise, by the demands of efficiency and ease of use, corporate and national secrecy, and sheer, planet-spanning scale. We live in an age characterised by the violence and breakdown of such systems, from global capitalism to the balance of the climate. If there is any hope for those exposed to its excesses from the cradle, it might be that they will be the first generation capable of thinking about global complexity in ways that increase, rather than reduce, the agency of all of us."

    Obviously, that dude is one of those many unteachable misanthropes who even take part in the development of that new dark age deliberately, as we have proven with his infringement of our copyright and other rights as well as his misleading of the public on the one hand, and as can be seen with the lying press on the other hand.

    But at least, another external entity has recognized and confirmed that the company Google has copied original and unique, essential parts of our iconic work of art titled Ontologic System and created by C.S..


    22.June.2018
    Clarification
    In the last months, the field of eXplainable Artificial Intelligence (XAI or XArtI), or being more precise, eXplainable SoftBionics (XSB), which

  • on the one hand also includes subfields like for example
    • eXplainable Machine Learning (XML or XMachL),
    • eXplainable Computer Vision (XCV),
    • eXplainable Cognitive Agent System (XCAS), and
    • eXplainable Evolutionary Computing (XEC),

    and

  • on the other hand is an essential part of our original and unique work of art titled Ontologic System and created by C.S., as can be seen easily with our related explanations given multiple times over the years, specifically in relation with the
    • basic properties of our OS, including the
      • well-structured and -formed,
      • validated and verified,
      • specification- and proof-carrying,
      • intelligent, as well as
      • collaborative and cooperative,
    • field of Pure Rationality (see also the Clarification of the 14th of May 2016 and 8th of July 2016),
    • field of Friendly AI, and
    • Bridge from Natural Intelligence (NI) to Artificial Intelligence (AI), as well as
    • the 1st and 2nd rings of the management structure and the assigned ID spaces of the IDentity Access and Management System (IDAMS) structure of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) conceptually sketched in the Ontonics Further steps of the 10th of July 2017,

    became a topic of research and public discussion.

    These elements of our Ontologic System (OS) and their integration by our Ontologic System Architecture (OSA) allow us

  • in general to
    • make out of the opaque black box a transparent glass box even
      • dynamically,
      • cooperatively,
      • interactively,
      • with the human-in-the-loop, and
      • in dialogue with a human,

      and

    • integrate XSB in industrial workflows,

    and

  • in particular to transpose systems based on Machine Learning (ML), statistics, and/or probabilistics to systems based on logic.

    Indeed, XAI already emerged in the

  • 1970's and 1980's in relation with hand-coded rules of very few expert systems respectively systems for computer-based medical decision making, and
  • 1990's again in relation with non-hand-coded rules of an expert system based on rudimentary AI,

    but in fact, the combination of for example an Artificial Neural Network (ANN) with backpropagation, a Recurrent Neural Network (RNN), or a Deep Neural Network (DNN), specifically a deep (reinforcement) learning system, with XSB and hence with XAI is an original and unique, essential part of our Ontologic System and the Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs) and High Performance and High Productivity Computing Systems (HP²CSs) based on it, and therefore copyrighted.
    In this relation, we also have to give the information that a document about a utilization of our deep (reinforcement) learning system, which is called Learning by Doing, utilized for autonomous vehicles, also included in our AutoBrain of our business unit Style of Speed, and called here Learning by Driving, is not about eXplainable AI (XAI) but merely explains how our deep learning system respectively Learning by Driving approach works and is utilized for the control of self-driving automobiles. (We will take a look at that document once again.)

    "As regulators, official bodies, and general users come to depend on [SB]-based dynamic systems, clearer accountability will be required for decision making processes to ensure trust and transparency", as well as licensing each reproduction of our OS in whole or in part.

    It has become obvious once again that

  • the federal agencies (e.g. the Defense Advanced Research Projects Agency (DARPA)) and Non-Governmental Organizations (NGOs) (e.g. OpenAI) are continuing with stealing our Intellectual Properties (IPs), while
  • the lying press and fake news providers are refusing to tell the truth in favour of continuing with misleading the public, and
  • both are collaborating with other serious criminal fraudulent entities, who have done so, for doing so.


    23.June.2018

    19:15, 24:00, and 27:30 UTC
    Preliminary result

    In relation with the Preliminary result of the 15th of June 2018 we read the document titled "Semantic Type Qualifiers" once again and are also reading the document titled "Constrained Types for Object-Oriented Languages" closely related to the programming language X10 of the company IBM. But already so far we can see what has been done:
    The authors respectively IBM also copied the concept of Semantic Type Qualifiers and quoted other sources at the related text positions to camouflage that. Conceptually, the constrained types approach is the same as the semantic type qualifiers approach with a few differences: the predicate, which the semantic type qualifiers approach puts after the (optional) where in a case clause and in an invariant clause, is put in the properties of an object, which are specified in a parameter list right after the name of a base class or an interface in a class or an interface definition (representing an invariant), and in a where clause of a method or a constructor definition (representing a precondition on a parameter), as it is also done with programming environments based on the modular, object-oriented, procedural, functional, and logic programming paradigms, and also with compile-time type checking and type inference. But this is how it is done with the programming languages of the Poplog environment.
    In addition, we also found incremental compilation, a compiler plugin system, and all the other features, which we have listed in said report about the preliminary result and which are the same as in the Poplog environment and our Ontologic System (OS), respectively as in our integration of the Poplog environment in our OS.
    Eventually, the constrained types approach is the same as the integration of the semantic type qualifiers approach in the Poplog environment, which is our approach, obviously.
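    For illustration only, and explicitly not as X10 code or as the system of either quoted document, the following loose C++ sketch shows the general idea of attaching a value-dependent invariant to a type and a precondition to a parameter, both checked at compile time; the names Vec and make_unit_vector are illustrative assumptions.

    // Loose C++ analogy (illustrative only): a type parametrized by a value
    // carries an invariant, and a function carries a precondition on its
    // parameter, both checked at compile time via static_assert.
    #include <cstdio>

    template <typename T, int N>
    class Vec {
        static_assert(N > 0, "invariant: the length N must be positive");
        T data[N];
    public:
        int size() const { return N; }
        T& operator[](int i) { return data[i]; }
    };

    // Rough counterpart of a precondition in a where clause: call sites whose
    // length argument violates the predicate are rejected by the compiler.
    template <int N>
    Vec<double, N> make_unit_vector() {
        static_assert(N > 0, "precondition: N must be positive");
        Vec<double, N> v;
        for (int i = 0; i < N; ++i) v[i] = 0.0;
        v[0] = 1.0;
        return v;
    }

    int main() {
        Vec<double, 3> ok;                         // satisfies the invariant
        Vec<double, 4> u = make_unit_vector<4>();  // satisfies the precondition
        // Vec<double, 0> bad;                     // would not compile
        std::printf("sizes: %d and %d\n", ok.size(), u.size());
        return 0;
    }

    Unlike the constrained types of the quoted document, such a sketch can only express compile-time constant predicates; it merely illustrates the direction of the argument.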

    While this might be legal in relation with the semantic type qualifiers approach, though we are already critical in this separate case, the conformity with our OS and our related integrated approach in this specific field is more substantial and significant, because

  • we also have a similar approach with the basic properties of (mostly) being reflective, validated and verified, and, by the reflective property, validating and verifying, as well as specification- and proof-carrying,
  • we have our integrating Ontologic System Architecture (OSA), which integrates all in one and comprises everything
    • listed on the website of our OSs OntoLix and OntoLinux in general (for sure in accordance with the related dates of discussion, publication, or/and listing), and
    • even the paradigms and languages of the
      • C,
      • C++, and
      • many other programming languages,
         
      • Structured Query Language (SQL),
      • SPARQL Protocol and Resource Description Framework (RDF) Query Language (SPARQL), and
      • many other Domain-Specific Languages (DSLs) and query languages,

      and

      • Poplog,
      • Maude, and
      • many other environments,

      which we call Ontologic(-Oriented) (OO 3) or simply Ontologic programming paradigm, and Ontologic Computing paradigm, in particular,

    and

  • the fact that, in a reflective system and in our OS, there is no difference, from the conceptual point of view, between a programming language and an operating system implemented in this programming language (see also the reflective, object-oriented, and distributed operating system Apertos (Muse) and the Cognac system based on Apertos),

    which proves our view and claims of infringements of our copyright and other rights, whereby the copyright infringement is the less serious legal issue. IBM wanted to steal it all and has done so throughout the last 2 decades, as is the case with many other companies.


    24.June.2018
    SOPR #122
    We have thought and discussed about the following topics:

  • ternary plus one system,
  • License Model (LM),
    • general issues,
    • update of the LM with 2 new licensing options, and
    • open source licensing,
  • High Performance and High Productivity Computing Systems (HP²CSs),
  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs), and
  • Robotic Systems (RSs).

    Ternary plus one system
    Three camps emerged in the last months, including entities who

  • collect data and do not support data democracy,
  • collect data and protect data, but do not support data democracy,
  • collect data, protect data, and support data democracy, and
  • collect no data, protect data, and support data democracy.

    The common interest resulting in a common ground for all camps is the utilization of big data, and for the first three camps it is the monetizing of big data.

    License Model
    The latest findings of fraudulent activities, which already started in the late 1990's and early 2000's, led us to rethink our LM once again, because many companies have been doing nothing else than copying our works of art for around 2 decades now.
    One of the resulting main issues is that elements of our Ontologic System (OS) have been taken by them in such a way that we could only prove a causal link with our works and other legal issues several years later. But then said companies had already secured related market areas. Now, they are depending on our decisions to allow their activities in these market areas with our Intellectual Properties (IPs) and we are not sure if our LM does compensate the ongoing damages correctly and sufficiently.
    There are convincing pro and contra arguments from the personal, social, political, and economical points of view for maintaining or increasing the fees and share, but eventually we concluded once again that they should not change. Instead, we planned to make the year 2019 a test year to see if it does work very well or if we have to rethink the LM once again.

    Furthermore, the new licensing options for

  • High Performance and High Productivity Computing Systems (HP²CSs) and
  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs)

    have been added to the LM, which will not add more than 2.50 U.S. Dollar each to the basic fee for the reproduction of our OS.

    Open source licensing
    We are looking at how open source licensing is practiced in relation with a patented item.

    We also found the text of a license created by the User Level Failure Mitigation (ULFM) initiative of the University of Tennessee, which potentially could be adapted in the following way in relation with the work of the ULFM initiative but also in relation with other open source works:
    Copyright (c) 2012-2017 The University of Tennessee and The University of Tennessee Research Foundation. All rights reserved.
    $COPYRIGHT$
    Additional copyrights may follow
    [Copyright (c) 2006-2017 Christian Stroetmann, Ontonics, and The Society for Ontological Performance and Reproduction. All rights reserved.]
    $HEADER$

    In the last weeks, we have shown our significant contributions in the field of High Performance and High Productivity Computing Systems (HP²CSs) in more detail, such as our improvements and further developments of

  • architectures for
    • parallel operating systems and
    • distributed operating systems,

      specifically our

    • exception-less system call mechanism, and
    • Remote Direct Memory Access (RDMA) mechanism or technology over the Internet Protocol (IP) suite, commonly known as Transmission Control Protocol (TCP)/Internet Protocol (IP) or simply TCP/IP,
  • programming paradigms, languages, and systems, and also development environments,

    specifically our

    • asynchronous agent model or Actor- and Agent-Oriented Programming (AAOP) paradigm in particular, and
    • Ontologic Programming (OP) paradigm in general, and also
    • Integrated Development Environment (IDE),

    and also

  • improvements and further developments in the field of Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs),

    which can be utilized to realize extremely advanced HP²CSs.

    In the last weeks we have also shown our significant contributions in the field of Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs) in more detail, such as our improvements and further developments of

  • architectures for
    • distributed file systems, distributed data stores, and distributed databases,
    • distributed virtual machines, and
    • distributed operating systems,

      specifically our

    • integrations of such distributed systems,
    • foundations for the User Level Failure Mitigation (ULFM) proposal, including
      • ULFM Message Passing Interface (ULFM-MPI) standard,
  • programming paradigms, languages, and environments or Integrated Development Environments (IDEs), and also
  • the field of High Performance and High Productivity Computing Systems (HP²CSs),

    which can be utilized to realize extremely advanced RDSs.

    Specifically notable in this relation are the facts that

  • on the one hand
    • distributed ledgers require safe and secure operating systems and network functionalities as foundation, and
    • blockchain-based systems are not as safe and secure as thought and required,

    and

  • on the other hand only our OS is the
    • truly safe and secure, and
    • even legal

      overall solution with its

    • advanced (subsystem) architectures,
    • complete hardware and software stacks, and
    • integrating Ontologic System Architecture (OSA).

    With these contributions we

  • put systems, applications, and services in many other basic fields, such as for example the
    • Internet,
    • World Wide Web (WWW),
    • Semantic (World Wide) Web (SWWW),
    • Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES), including
      • Industry 4.0,

      and

    • robotics,

    on a completely new level and

  • make the realization of systems, applications, and services possible by
    • transforming them into,
    • making them parts of,
    • integrating them with, or
    • realizing them as

      our

    • SoftBionic (SB) supercomputer,
    • Ontologic supercomputer,
    • Ontologic Net (ON),
    • Ontologic Web (OW),
    • Ontologic uniVerse (OV), or
    • Ontologic Applications and Ontologic Services (OAOS)

      in particular and

    • Ontologic System (OS)

      in general

    in superior and even totally new as well as unforeseeable and unexpected ways.

    At this point, we would like to give the reminder once again, that we only accept digital currencies that are issued or accredited by our Ontologic Bank.

    Robotics
    Robots are getting more and more skills, making them more and more a variant of our Ontoscope, specifically by increasing their multimodality.
    Indeed, we have been trying to draw the white, yellow, or red line for more than 10 years, which is especially difficult in the field of robotics due to the various features that existed before. But this line does exist and becomes better and better recognizable.


    26.June.2018
    Style of Speed Further steps
    We adapted a device for a system in two different ways. The one way utilizes the device directly without a change and the other way utilizes it as part of a new subsystem developed by us. If this subsystem works as envisioned, then we will be able to increase the efficiency of the overall system by a magnitude.
    In this relation, we also looked at an improvement of another subsystem, but we are not sure if it works as described at all and if we can adapt this as well.


    27.June.2018

    01:18 UTC+2
    Preliminary result or Clarification

    *** Work in progress - better wording, maybe some quotes and explanations missing ***
    In relation to the Clarification of the 4th of June 2018, we have taken a closer look at the Open Multi-Processing (OpenMP) Application Programming Interface (API), which resulted in surprising facts.

    In an online encyclopedia we found the following interesting information about the OpenMP and Message Passing Interface (MPI) Application Programming Interfaces (APIs): "An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems,[6 [Running OpenMP applications efficiently on an everything-shared SDSM]] to translate OpenMP into MPI[7 [Programming Distributed Memory Systems using OpenMP]][8 [OpenMP compiler for distributed memory architectures]] and to extend OpenMP for non-shared memory systems.[9 [Cluster OpenMP]]."
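
    As a rough illustration of this hybrid model (a minimal sketch only, with arbitrary numbers and no relation to our OS), MPI provides the parallelism between the nodes while OpenMP parallelizes a loop within each (multi-core) node; such a program is typically built with an MPI compiler wrapper plus an OpenMP flag, for example mpic++ -fopenmp:

    // Hybrid OpenMP + MPI sketch: one MPI rank per node, OpenMP threads inside each rank.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        // Ask for an MPI library that tolerates OpenMP threads inside a rank.
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 1000000;
        double local_sum = 0.0;

        // OpenMP: parallelism within the (multi-core) node.
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = rank; i < N; i += size)
            local_sum += 1.0 / (1.0 + i);

        // MPI: parallelism between the nodes.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("global sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }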

    Indeed, in the document titled "Programming Distributed Memory Systems Using OpenMP" we found the following explanations:
    "a combined compile-time/runtime system" [An integrated compile-time/run-time software distributed shared memory system, 1996]
    "compiler [...] to orchestrate the learning and pro-active reuse of communication patterns" [in part in Combined compile-time and runtime-driven, pro-active data movement in software dsm systems, 2004]
    "Translation of OpenMP to MPI" [Towards automatic translation of openmp to mpi, 2005]
    "a runtime inspection-based scheme for translating OpenMP applications" [Optimizing Irregular Shared-memory Applications for Distributed-memory Systems, 2006]

    But
    "Version 4.0 of the specification was released in July 2013.[...] It adds or improves the following features: support for accelerators [(e.g. General-Purpose Computing on Graphics Processing Units (GPGPU))]; atomics; [...]." This came before from our OS and from Apertos (Muse) integrated in our OS.
    Furthermore, from the document titled "A Reflective Architecture for an Object-Oriented Distributed Operating System", which describes the Muse object model used for the implementation of the Muse operating system, we recall the following:
    "Ultra large distributed systems (ULDS)"
    "uniform perspective", including Distributed Shared Memory (DSM)(?) and Distributed Global Address Space (DGAS)(?!) {DSM comes also from SPACE?}
    "self-advancing"
    "An ULDS will be composed of heterogeneous hardware", including non-shared memory systems and mobile computing systems
    "Interaction between objects is accomplished by message passing."
    We added GPGPU and other Accelerated Processing Units (APUs) (see the points "Multiprocessing (see Linux)" and "Parallel operating of graphic cards, and other multimedia cards from different manufacturers" listed in the Feature-List #1).
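
    The last quoted point, interaction by message passing, can be illustrated with a tiny, purely illustrative sketch in which one object sends messages to another through a mailbox instead of calling it directly; the class and selector names are ours and do not reproduce the Muse object model itself:

    // Minimal message-passing sketch: objects interact via messages placed in a mailbox.
    #include <cstdio>
    #include <queue>
    #include <string>

    struct Message { std::string selector; int argument; };

    class Counter {
    public:
        void receive(const Message& m) { mailbox.push(m); }   // enqueue a message
        void run() {                                          // process the mailbox
            while (!mailbox.empty()) {
                Message m = mailbox.front(); mailbox.pop();
                if (m.selector == "add") value += m.argument;
                else if (m.selector == "print") std::printf("value = %d\n", value);
            }
        }
    private:
        std::queue<Message> mailbox;
        int value = 0;
    };

    int main()
    {
        Counter c;
        c.receive({"add", 5});
        c.receive({"add", 7});
        c.receive({"print", 0});
        c.run();   // prints: value = 12
        return 0;
    }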

    At first sight this looks like ordinary technological progress or a legal use, but this is no longer the case in relation to our transformation of the

  • Internet into our Ontologic Net (ON) and
  • World Wide Web (WWW) and Semantic (World Wide) Web (SWWW) into our Ontologic Web (OW)

    on the basis of our

  • High Performance and High Productivity Computing Systems (HP²CSs) and
  • Resilient Distributed Systems (RDSs) or Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs),

    specifically our

  • SoftBionic (SB) supercomputer and
  • Ontologic supercomputer.

    This is also one of the reasons why we further developed the programming language Java, similarly to the company IBM with X10 and others with UPC, UPC++, and other PGAS languages. But, besides many other features, they all have also not seen the non-shared memory part, in contrast to us with our OS, which also integrates

  • both fields of shared memory systems and non-shared memory systems,
  • Global Address Space (GAS), including
    • Distributed Global Address Space (DGAS),
    • Partitioned Global Address Space (PGAS), and
    • Asynchronous Partitioned Global Address Space (APGAS),
  • communication protocols based on the message passing technique,
  • request-response protocols for Inter-Process Communication (IPC), such as
    • Remote Procedure Call (RPC) and
    • Remote Method Invocation (RMI),
  • Remote Direct Memory Access (RDMA) (see the sketch after this list),
  • Accelerated Processing Units (APUs), such as the types
    • Graphics Processing Unit (GPU),
    • Physics Computing Unit (PCU), and
    • SoftBionic Processing Unit (SBPU),
      • Intelligence Processing Unit (IPU),
        • Tensor Processing Unit (TPU),
        • Vision Processing Unit (VPU),
        • Neural Processing Unit (NPU),
  • and so on.
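
    As a rough illustration of the remote memory access and global address space style of communication listed above (a minimal sketch using only standard MPI one-sided operations, neither our OS nor a PGAS language), each process exposes a window of its local memory that other processes can read directly:

    // MPI one-sided (RMA) sketch: rank 0 reads an integer directly from rank 1's memory.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank exposes one integer; together the windows form a simple
        // partitioned global address space.
        int local = rank * 100;
        MPI_Win win;
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        int remote = -1;
        if (size > 1 && rank == 0) {
            // Read from the memory exposed by rank 1 without an explicit receive there.
            MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);

        if (rank == 0)
            std::printf("value read from rank 1: %d\n", remote);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }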

    The Cluster OpenMP extension for the Intel C++ Compiler was, or still is, affected as well, even though it was dropped with version 13.0. The latter proves our claim about the different utilization in the field of non-shared memory systems.


    28.June.2018
    Preliminary investigation of Graphcore started
    Here is a teaser in relation to our geniuses behind a multi-million or perhaps billion U.S. Dollar fraud, which we got as the only positive takeaway of our preliminary investigation:
    "One aspect all recent machine learning frameworks have in common - TensorFlow, MxNet, Caffe, Theano, Torch and others - is that they use the concept of a computational graph as a powerful abstraction."
    As we said (see for example the Website review of the 2nd of February 2018), all these recent Machine Learning (ML) frameworks are based on our OntoBot, which is

  • based on our integration of the Poplog and Maude environments, and
  • integrated with our

    or, said in other words, the Ontologic System Architecture (OSA) also integrates the OntoFS and the OntoScope, which also includes graph visualization, with the OntoCore and the OntoBot, which includes the functionality based on rewriting theory, specifically term graph rewriting utilized for generating, transforming, and processing abstract semantic graphs or term graphs, and also other graph rewriting.
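
    For orientation, the computational graph abstraction quoted above can be illustrated with a tiny, purely illustrative sketch in which nodes are operations, edges are data dependencies, and evaluation walks the graph; the structure and names are ours and do not reproduce any particular framework:

    // Minimal computational graph: nodes are operations, edges are data dependencies.
    #include <cstdio>
    #include <functional>
    #include <memory>
    #include <vector>

    struct Node {
        std::vector<std::shared_ptr<Node>> inputs;               // incoming edges
        std::function<double(const std::vector<double>&)> op;    // the operation
        double eval() const {                                    // evaluate by walking the graph
            std::vector<double> args;
            for (const auto& in : inputs) args.push_back(in->eval());
            return op(args);
        }
    };

    static std::shared_ptr<Node> constant(double v) {
        auto n = std::make_shared<Node>();
        n->op = [v](const std::vector<double>&) { return v; };
        return n;
    }

    static std::shared_ptr<Node> add(std::shared_ptr<Node> a, std::shared_ptr<Node> b) {
        auto n = std::make_shared<Node>();
        n->inputs = {a, b};
        n->op = [](const std::vector<double>& x) { return x[0] + x[1]; };
        return n;
    }

    static std::shared_ptr<Node> mul(std::shared_ptr<Node> a, std::shared_ptr<Node> b) {
        auto n = std::make_shared<Node>();
        n->inputs = {a, b};
        n->op = [](const std::vector<double>& x) { return x[0] * x[1]; };
        return n;
    }

    int main()
    {
        // Graph for (2 + 3) * 4; a framework would additionally transform and optimize such graphs.
        auto g = mul(add(constant(2.0), constant(3.0)), constant(4.0));
        std::printf("%f\n", g->eval());   // prints 20.000000
        return 0;
    }
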
    Furthermore, we explained several times that parts of our OS can also be realized in hardware.

    In very short: Bingo!!! A little longer: As much as possible of the material about our original and unique, iconic Ontologic System has been stolen from our websites, specifically our websites of OntoLinux and OntomaX, and merely edited. The usual results of those illegal activities are infringing our copyright, damaging our reputation, making illegal agreements, and so on.

    Is it not?


    29.June.2018
    SOPR #123
    We have thought about and discussed the following four issues in more detail (once again) (see also the issue SOPR #121 of the 29th of May 2018, section Update of the License Model (LM)).

    Issue #1
    Is the sale of a business share of a company, which is based on our Ontologic System (OS) or our Ontoscope (Os) or both, or on our Ontologic Applications and Ontologic Services (OAOS), a revenue generated with our works of art or Intellectual Properties (IPs)?

    For example, a venture capital firm provides an investment service and invests in a start-up that manufactures a product based on our OS and/or our Os, and gets a share of this start-up in return. The start-up grows and makes its Initial Public Offering (IPO). Later, the venture capital firm sells its share of this start-up, at a stock market for example. At this moment it makes a revenue with its investment (service) based on our IPs, which is obviously not a product, and therefore the revenue must be accounted like an OAOS.
    Furthermore, the start-up gets money for its business share in a first step. After its IPO the start-up gets money for the sale of another business share in a second step. In both cases, it makes a revenue with something other than its product and/or its service based on our IPs, and therefore the revenue must be accounted like an OAOS.
    Indeed, this example becomes more complicated when the

  • share is sold another time, which requires a royalty for the transaction, or
  • investment is not made in a start-up but in a company already listed at a stock market, which requires a clear separation of the revenue sources.

    However that may work, investors have to pay their share as well, or otherwise we will not allow the reproduction of our OS and our Os, or the performance of our OAOS, or both by a start-up or another company. There is no free lunch. For sure, double accounting has to be avoided in the case of a financial firm, but the difference between invested capital and revenue remains the basis for calculating our share of the revenue.
    We also said so in the case of cryptocurrencies based on our IPs.

    Issue #2
    If a first company has paid the fee for the reproduction of a part of our OS or our Os or both, and a second company reproduces our OS or our Os or both in whole or in part, which comprises this part, then should this reproduction of this part be accounted separately from the accounting of the reproduction of the OS or the Os or both?

    For example,

    • a first Ontoscope (Os) manufacturer integrates a SoftBionic Processing Unit (SBPU) manufactured by itself and
    • a second Os manufacturer integrates a SBPU manufactured by a third company, that already paid the fee for the reproduction of the SBPU.

    The separate accounting method is used by companies like Qualcomm and is disputed by companies like Apple.
    If we remember correctly, we have already discussed this second issue and also made the decisions that

  • on the one hand the separate accounting is the right method in our case, and
  • on the other hand our License Model (LM) will be balanced for establishing fairness between such different licensees, which has become a quite complex task.

    By the way, the foundational definition of an Ontoscope does not require a SBPU, though there might be situations that match the example better.

    Issue #3
    If a model of a SoftBionic (SB) function has been created with our OS or our OAOS, then is it a reproduction of a part of our OS or a performance of our OAOS, which has to be accounted accordingly when given away?

    The answer to the first part of the question depends on the type of the created function.

    We said Yes in relation to the second part of the question, specifically in relation to, for example, the performance of an OAOS in the field of engineering, in particular of an OAOS based on the integration of

  • cloud computing,
  • Semantic (World Wide) Web (SWWW), generative design, and 3D manufacturing, or
  • Industry 4.0, for sure.

    Issue #4
    If a scientific result has been achieved with our OS, which includes Problem Solving Environments (PSEs) as a basic property, by automating science, then is the overall revenue generated with this scientific result a performance of an OAOS, and does it have to be accounted accordingly when given away?

    For example, a university utilizes our OS and/or an OAOS in whole or in part to develop a solution and patents this solution. In a subsequent step it licenses this patent to a licensee. For sure, the revenue generated with the licensing of this patent is still made with the performance of an OAOS.
    Furthermore, a licensee of this patent produces an item or provides a service based on this patent, or does both. In these cases, the decision has to be made whether the product is an OS or an Os and whether the service is an OAOS, which would imply that this patent is about an OS, Os, or OAOS.

    We also said so in relation with for example the performance of an OAOS in the field of engineering.

    At this point, we have to recall once again that

  • open source is not free beer, [corrected later]
  • "nearly all free software is open source, and nearly all open source software is free", [Free Software Foundation],
  • in accordance with the copyright law our OS and our Os can only be licensed by their creator C.S., which means that something like Copyright © 2006-**** C.S., Ontonics, or/and Society for Ontological Performance and Reproduction (SOPR) is written in an accredited open source license or license extension, and
  • we will not make any further concessions.

    The open source issue also shows a general strategy behind it, which

  • will not work as in the case of Microsoft Windows vs. Linux, Android, and Co. due to the fact that Windows was not a work of art created by William "Bill" Gates in contrast to the OS and the Os created by C.S., but
  • shows that the companies Google, Amazon, Intel, Oracle, IBM, unbelievable but true, even Microsoft, Uber, for sure, the members of the OpenAI organization, and so on are responsible for this way of orchestrated blackmailing, which is much more grave than infringing our copyright and damaging our reputation.

    Due to the latest findings we are thinking about the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR) once again, specifically about suspending the AoA and correspondingly the SOPR until 2025 and restructuring the LM, which would increase the fees and share in some circumstances. So to say, we are thinking about making our bet. :)

    By the way, there is no problem in catching 1,000 or even more managers working in the Silicon Valley, U.S.A., and elsewhere, so we can handle that overall legal issue, and we will do it with the support of many federal authorities if our related workload should exceed a limit in the near future. Indeed, it has become a question of social ethics and moral values whether such a behaviour should be tolerated by the societies and their constitutional states.
    Honestly, we are not sure which is the better way, but the marshal is already in town and the cavalry has already saddled up as well.

    © and/or ®
    Christian Stroetmann GmbH
    Disclaimer