News 2022 February

04.February.2022

02:20 and 15:37 UTC+1
Investigations::Multimedia

New York Times
The newspaper New York Times refuses democracy and lies again.
We quote once again a report of the New York Trolls, which is obviously about our Ontoverse (Ov): "[... a company] tries to focus on the new idea of a metaverse [...]."
[...]
The news, along with increased spending as [a company] tries to focus on the new idea of a metaverse, [...].
[...]
[...] The company views the metaverse as the next generation of the internet, in which people will share virtual experiences. It lost more than $10 billion in 2021 as it built the virtual reality goggles and smart glasses that will make it possible for users to access the metaverse.
[...]
[...] app and the augmented reality glasses [...]. [...]

[Textbox:] What Is the Metaverse, and Why Does It Matter?
The origins. The word "metaverse" describes a fully realized digital world that exists beyond the one in which we live. It was coined by Neal Stephenson in his 1992 novel "Snow Crash," and the concept was further explored by Ernest Cline in his novel "Ready Player One."
An expanding universe. The metaverse appears to have gained momentum during the online-everything shift of the pandemic. The term today refers to a variety of experiences, environments and assets that exist in the virtual space.
Some examples. Video games in which players can build their own worlds have metaverse tendencies, as does most social media. If you own a non-fungible token, virtual-reality headset or some cryptocurrency, you're also part of the metaversal experience.
How Big Tech is shifting. Facebook staked its claim to the metaverse last year, after shipping 10 million of its virtual-reality headsets and announcing it had renamed itself Meta. Google, Microsoft and Apple have all been working on metaverse-related technology.
The future. Many people in tech believe the metaverse will herald an era in which our virtual lives will play as important a role as our physical realities. Some experts warn that it could still turn out to be a fad or even dangerous."

Comment
First of all, the original Metaverse is not related to a Non-Fungible Token (NFT), cryptocurrency, or other such elements, but is merely an immersive 3D Virtual World (VW) or Immersive Virtual Environment (IVE or ImVE), or respectively a Virtual Reality Environment (VRE), and is not an Augmented Reality Environment (ARE), Augmented Virtuality Environment (AVE), Mixed Reality Environment (MRE), eXtended Mixed Reality Environment (XMRE) or eXtended Reality Environment (XRE), or any other Reality Environment (RE).

The

  • Ontologic Net (ON) with its Ontologic Net of Things (ONoT) is the successor of the Internet and the Internet of Things (IoT) as the transformation of the Internet into an Interconnected supercomputer (Intersup) and the integration of Cybernetics, Hard- and SoftBionics, and Robotics with the Internet and the Intersup as the (architecture for) Resilient, Cybernetic, Autonomic, and Space-Based Computing and Networking (RCASBCN), or better said, Ontologic High Safety and Security Computing and Networking (OS²N),
  • Ontologic Web (OW) with its Ontologic Web of Things (OWoT) is the successor of the World Wide Web (WWW), the Semantic (World Wide) Web (SWWW), and the Web of Things (WoT) as the (architecture for) Semantic and Cognitive High Performance and High Productivity Computing and Networking (CCHP²CN), or better said, Ontologic High Performance and High Productivity Computing and Networking (OHP²CN), and
  • Ontologic uniVerse (OV) is the successor of the reality as the fusion of all real and physical, cybernetical and digital, and virtual and metaphysical (information) spaces, environments, worlds, and universes, or realities, to the New Reality (NR) (spacetime fabric) as (the architecture for) something entirely or totally new,

    which collectively are our Ontoverse (Ov), which again is the representation of the fused realities and worlds, the manifestation of our New Reality (NR), or the New Reality Environment (NRE) of our Ontologic System (OS), and also includes what is wrongly called metaverse, multiverse, web3, or whatsoever.

    The Ontoscope (Os) is the access place or access device, and includes the variants Apple iPhone, iPad, and Apple Watch, Google, Samsung, and Co. Android Smartphone, Smarttablet, Smartwatch, and Smartdevices, Microsoft Surface and HoloLens, and Meta (Facebook) Oculus.

    Our OS with its Ov, ON, OW, and OV, and also Os is neither an idea nor something new, but an artist-related or personal, original and unique, and already iconic creation of C.S., which was published more than 15 years ago and was absolutely unforeseeable and unexpected by an expert in the related fields, respectively a Person of Ordinary Skill In The Art (POSITA), at the end of October 2006, and therefore our OS in whole or in part is a copyrighted work of art of C.S..

    Do yourself a favour and do not give others new arguments every day to call a certain group whatsoever.


    07.February.2022

    Ontonics Further steps

    We have found some new takeover candidates and we are sure that some entities will rub their eyes once they find out which ones these are. :)

    Style of Speed Further steps

    We noted that our model series 9EE, specifically in its newest generation announced and discussed in the Further steps of the 5th, 9th, and 29th of January 2021, already sparked a broader interest and left a deep impression.

    We also have designed the last exterior components of our special fun projects 9x9 and 91x, which are now ready for realization or production. In the end it was much easier than we thought at first.
    But honestly, we might merge both pathfinder projects into one tire-slaying, tarmac-destroying, and mind-blowing model called 91E or so.

    We would also like to share some more information about our special fun project SF SP mentioned in the Further steps of the 5th and 7th of January 2021.
    What we did not publish at that time was our development of a twin-turbo forced induction system for the model 812 Superfast, which is also the basic model of the model Monza SP, also known as 812 Speedster, to give both models what can never be enough:
    More Power - More Style - More Fun.

    But in the end, we did not go further in this direction, because

  • on the one hand we are not fans of cars with a long front section and therefore already designed the SF SP, and
  • on the other hand we also developed a contemporary improvement for the model SF90, which not only comprises more combustion power, increased by more than 150 kW (203 PS / 201 hp), but also more Pure Electric™ Power (PEP) with improved electric motors and doubled battery capacity. Maybe we will call this conversion Lampo Rampante==Prancing Flash due to the estimated overall system power of 1,050 kW (1,427 PS / 1,408 hp) (see the conversion check after this list).
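
    For transparency, the horsepower figures above follow from the standard unit conversions (1 kW ≈ 1.35962 PS ≈ 1.34102 hp). A minimal sketch to check them; the constants are the usual conversion factors and nothing here is specific to the SF90:

      # Check of the quoted power figures via standard unit conversions.
      KW_TO_PS = 1.35962  # metric horsepower (PS) per kilowatt
      KW_TO_HP = 1.34102  # mechanical horsepower (hp) per kilowatt

      for kw in (150, 1050):
          print(f"{kw} kW = {kw * KW_TO_PS:.1f} PS = {kw * KW_TO_HP:.1f} hp")
      # 150 kW  = 203.9 PS  = 201.2 hp   (quoted: 203 PS / 201 hp)
      # 1050 kW = 1427.6 PS = 1408.1 hp  (quoted: 1,427 PS / 1,408 hp)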


    08.February.2022

    Ontonics Blitz Fund I #25.4.13

    As we said before, our Superbolt™ #4 Electric Power (EP) has simply reimagined the battery again and again.
    Now, we extended its portfolio of solutions, electric energy storage devices, and related supplements once again.

    We have a new form factor of our range extender devices, which can easily be carried, literally speaking, by hand or in the trunk of a vehicle, because it is basically a trolley, rolling luggage, or wheeled suitcase with an extractable or telescoping handle, and is correspondingly called PowerCase.
    The dimensions of the PowerCase devices are chosen so that in most cases a PowerCase can be

  • placed upright on the side of a car or
  • laid under a car when charging.

    Furthermore, we also designed and developed a conversion kit for existing cars, which branches off the original power socket and provides an additional socket in the trunk.
    In this way, vehicle owners can use one of our

  • electric energy storage devices and do not need to buy the largest battery pack of the manufacturer, and
  • range extender devices by simply
    • getting it from one of our millions of worldwide distribution points,
    • putting it into the trunk, and
    • connecting it with the additional socket even by utilizing the original charging cable.

    We already guess that our new conversion kit will become optional or even standard equipment of new vehicles.

    We also already developed a new expansion of our battery-sharing system, which is basically the same as the scooter-sharing system, with our PowerCases instead of the electric motorized scooters.
    Indeed, we can already imagine seeing our PowerCases at every street corner.

    [Image:] Pelican case 1510

    CC BY 2.0 Miki Yoshihito

    And there is more to come. :)


    09.February.2022

    Style of Speed Further steps

    We have configured one of our Active Components and can only say that the result is truly convincing, so to say, even for those persons who always mentioned a foundational problem or deficit of a more general solution.


    10.February.2022

    21:08 and 22:34 UTC+1
    Investigations::Multimedia

    New York Times
    The newspaper New York Times refuses democracy and lies again.
    We quote the next lying report of one of those dirty fellows of the New York Trolls: "[...] And Meta disclosed that it had spent $10 billion last year building out its new namesake, the metaverse, the virtual-reality wonderland that Facebook is betting will be the internet's next big thing - but that, so far, remains more virtual than reality.
    [...]
    Meanwhile, it's easy to see why investors might be skeptical that Facebook is the company that will invent the next big thing, whether the metaverse or whatever else.
    [...]
    For many years, the photostat [(photostatic copy or photocopy)] strategy worked out just fine. There wasn't even really anything dishonorable about it; the best ideas in tech - or, for that matter, in life - are often pastiches of lots of different ideas. As Steve Jobs said: "Good artists copy. Great artists steal." [...]
    [...]
    Look, again, at Instagram. When the app started, it was a simple feed of photos. Over the years Facebook has loaded it up with a slew of features picked up elsewhere. Instagram now broadcasts live streams - a feature first pioneered by start-ups like Twitch and Periscope. One of Instagram's most popular features is Stories, a kind of photo diary of a user's day. The Stories format was invented by Snapchat, whose success in the early 2010s looked like it posed a threat to Facebook's dominance. [...]
    [...]
    I'm not surprised. I use Instagram often, but I find it increasingly messy. It's a dog's breakfast of lots of different social features all sitting uncomfortably together - a place for permanent photos, for ephemeral stories, for influencers' short videos and even for shopping. The Facebook app, meanwhile, feels like a lost cause of bloat; like a restaurant that serves too many different kinds of cuisines, the app tries to do so much it ends up doing almost nothing well."

    Comment
    What an ... liar. That is not an opinion, but just the next publication, whose only goal is to damage the rights and properties, as well as the reputation and integrity of C.S. and our corporation, and therefore it is just a crime, and not free speech or whatsoever at all.
    Everybody in the industries knows that C.S. created the Evolutionary operating system (Evoos) in the year 1999 and then created its successor with the Ontologic System (OS) with its Ontoverse (Ov) and Ontoscope (Os) in the following years until the year 2006, including

  • what is wrongly and illegally called cloud computing, metaverse, multiverse, Decentralized Web (DWeb), Web 3.0, Web3, and so on, because it is the Ontoverse (Ov), and
  • what is wrongly and illegally called Apple iOS and iPhone, Google, Samsung, and Co. Android and Smartphone, Microsoft Windows since version Vista (6), and Surface and HoloLens, Meta (Facebook) and Oculus, and so on, because it is the Ontoscope (Os),

    which since 2006 is already the next big thing, though not in the Internet and the World Wide Web (WWW), but in their successor, and therefore does not have to be invented at all.
    And because the Evoos and OS with the Ov and Os are artist-related or personal, original and unique, and already iconic expressions of ideas, creations, or works of art, which were published more than 21 years and 15 years ago, respectively, and were absolutely unforeseeable and unexpected by an expert in the related fields respectively a Person of Ordinary Skill In The Art (POSITA) in mid-December 1999 and at the end of October 2006, respectively, they are copyrighted and prohibited for fair use and democratization worldwide.

    What an ... antisocial moron. For sure, it was, is, and will be always dishonorable to steal.
    The author W. H. Davenport Adams already said in 1892 that "to imitate" was commendable, but "to steal" was unworthy: "That great poets imitate and improve, whereas small ones steal and spoil."
    But it was the poet T. S. Eliot, who said in 1920 that "to imitate" was shoddy, and "to steal" was praiseworthy: "Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different." That statement became "The immature poet imitates and the mature poet plagiarizes" in 1949, "Immature artists borrow; mature artists steal" in 1959, "Immature artists imitate. Mature artists steal" in 1962, "Immature artists copy, great artists steal" in 1974, "Lesser artists borrow; great artists steal" in 1986, and then came the kleptomaniac Steve Jobs with "Good artists copy. Great artists steal." in 1988 and "Ultimately it comes down to taste. It comes down to trying to expose yourself to the best things that humans have done and then try to bring those things in to what you're doing. I mean Picasso had a saying he said good artists copy great artists steal. And we have always been shameless about stealing great ideas." in 1996.
    At least, he attributed the saying to Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso, but it is another shame that the U.S.America is celebrating that other big lie in relation to our Ontologic System (OS) and Ontoscope (Os) as a national achievement at a large exhibition, which the commentator C. E. M. Joad already noted in 1927: "Whereas in Europe the height of originality is genius, in America the height of originality is skill in concealing origins." We also could formulate it in relation to quality and quantity.
    So it is time to stop the steal and correct the attribution of the iOS iPhone and Android Smartphone as well. Isn't it?
    Howsoever, we license, sue, and take back the stolen items and plagiarisms.

    What an ... incompetence. Before Meta's (Facebook's) market capitalization decreased, we published the note SOPR Co²S and S³ on the 25th of January 2022, which has been included in the related section Social and Societal System (SoSoS or S³) of the issue SOPR #33l of the 25th of January 2022:
    "Has anybody thought we would wait for the next stunt?"

    And that dirty fellow even discusses parts of it, as can be seen with the last two sections quoted above.

    By the way:

  • We are allowed by constitution and law to blacklist any media company that refuses to respect all of the rights and properties of C.S. and our corporation, because such an infringement is considered a reasonable cause to do so.


    18.February.2022

    20:27 UTC+1
    OntoLinux and OntoLix Website update

    *** Work in progress ***
    We added and marked documents on the webpage Links to Software of the website of OntoLinux.
    We added to the section Intelligent/Cognitive Agent of the webpage Links to Software ...:

  • Manchester Metropolitan University, Department of Computing, and Queen Mary & Westfield College, Department of Electronic Engineering, M. Wooldridge and N.R. Jennings: Intelligent agents: Theory and Practice [PDF]
  • Carnegie Mellon University, School of Computer Science, and Fujitsu Laboratories: Oz project
    • Joseph Bates, A. Bryan Loyall, and W. Scott Reilly: An Architecture for Action, Emotion, and Social Behavior [PDF]
    • Joseph Bates, A. Bryan Loyall, and W. Scott Reilly: Integrating Reactivity, Goals, and Emotion in a Broad Agent [PDF]
    • Joseph Bates: The Role of Emotion in Believable Agents [PDF]
    • W. Scott Reilly: Synergistic Capabilities in Believable Agents [PDF]

    We also marked Cougaar, and PAL with CALO by **.

    For a discussion of these works see also the Clarification of today and the Clarifications cited therein.

    20:22 and 29:52 UTC+1
    Clarification

    *** Work in progress - some quotes, information, better wording, and better explanations missing ***
    In this Clarification we give more information in relation to the

  • Clarification of the 6th of May 2016, which is focused on the field of Virtual Environment (VE) and Intelligent Virtual Environment (IVE),
  • Clarification of the 18th of July 2021, which is focused on Ubiquitous Computing (UbiC) and Internet of Things (IoT), Cyber-Physical System (CPS), and Building Automation System (BAS) and Building Management System (BMS), which is also known as Intelligent Environment (IE), smart environment, responsive environment, intelligent building, intelligent home, smart building and smart home, and Ambient Intelligence (AmI), and
  • Clarification of the 25th of December 2021, which is focused on the various Web x.0,

    and continue the discussion of the various fields and related works in relation to

  • Artificial Life (AL),
  • Agent-Based System (ABS), including
    • Intelligent Agent System (IAS),
    • Multi-Agent System (MAS),
    • Holonic Agent System (HAS), and
    • Cognitive Agent System (CAS),

    but also

  • LifeLogging (LL),

    as well as

  • Caliber/Calibre,
  • Ontoverse (Ov), and
  • much more.

    At first, we quote a document about Artificial Life (AL).
    Then we quote a review about the field of Intelligent Agent System (IAS), including Mobile Agent System (MAS or MobAS), Interface Agent System (IAS or IntAS or InterAS), Information Agent System (IAS or InfAS), and Believable Agent System (BAS) and animated Virtual Worlds (VWs), and Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot).
    Then we quote documents about the field of Multi-Agent System (MAS or MulAS), including Holonic Agent System (HAS) and animated Virtual Worlds (VWs).
    Then we quote documents about our Evolutionary operating system (Evoos).
    Then we quote a document about our Ontoverse (Ov), wrongly called Metaverse.
    This Clarification is concluded with a summary and additional explanations about our Evoos and OS.

    The quoted and commented works include:

  • Artificial-Life Simulators and Their Applications
  • Intelligent agents: Theory and Practice
  • Teleporting - Making [X Window System] Applications Mobile
  • Immobile Robots [] AI in the New Millennium

  • Self-Organization in Multiagent Systems: From Agent Interaction to Agent Organization
  • Watching Your Own Back: Self-Managing Multi-Agent System

  • Meeting the Computational Needs of Intelligent Environments: The Metaglue [Multi-Agent] System

  • Integrating Reactivity, Goals, and Emotion in a Broad Agent
  • Synergistic Capabilities in Believable Agents

  • Test homepage of the Equator project

  • COGs: Cognitive Architecture for Social Agents
  • SIF - The Social Interaction Framework [] System Description and User's Guide to a Multi-Agent System Testbed
  • [Social Interaction Framework for Virtual Worlds (]SIF-VW[)]: An integrated system architecture for agents and users in virtual worlds
  • Reality and Virtual Reality In Mobile Robotics
  • Visual Programming Agents for Virtual Environments
  • Multi-agent Systems as Intelligent Virtual Environments

  • Agent Chameleons: Agent Minds and Bodies
  • Agent Chameleons: Virtual Agents [Powered By] Real Intelligence
  • NEXUS: Mixed Reality Experiments with Embodied Intentional Agents

  • Personalized Assistant that Learns

  • Online encyclopedia about the subject lifelog
  • LifeLog

  • Creation of the Roadmap 1.0

    We quote a document, which is about Artificial Life (AL) and was published in January 1995: "Artificial-Life Simulators and Their Applications
    [...]

    Connections between Alife and Traditional Computer Science
    Below we undertake a detailed examination of a number of representative Alife simulators, packages which provide a set of tools for building simulations of particular processes and phenomena. Many of the technical issues which must be addressed in Alife simulation are related to issues in traditional computer science, artificial intelligence in particular. This section seeks to place Alife simulation efforts in a computer-science context.

  • Generalities
  • Specific Problem Domains

    Generalities
    In the field of artificial intelligence, Alife-related work proceeds under the general heading of autonomous agents.
    An autonomous agent is a program which contains some sort of sensor and effector system. The agent operates within a software environment such as an operating system, a database, or a computer network. The sensors are used to observe features of this external environment. The effectors may alter the state of the environment or the state of other agents. Software agents pursue goals such as acquiring information about the environment or modifying its state, either individually or in teams. They do so without continuous intervention of a programmer/user.
    In part, the autonomous-agent community breaks into sub-communities along geographic lines. While in North America this work is treated under the heading "distributed artificial intelligence (DAI)", in Japan one refers to "Multi-Agent and Cooperative Computing (MACC)" and in Europe, "Modeling Autonomous Agents in a Multi-Agent World (MAAMAW)"[.]
    The subset of artificial life work which is conducted in artificial intelligence departments is typically distinguished by several hallmark features.

  • It is tightly coupled to practical, industrial problems.
  • While biological metaphors abound in this literature, no serious attempt is made to connect with empirical natural science.
  • While the autonomous-agent movement within artificial intelligence is a movement toward sub-symbolic [subsymbolic], physically grounded (situated) computation, the traditional artificial intelligence concern to explain higher-order cognitive processing remains in clear evidence.

    [...]

    General simulation platforms
    SOAR is a major artificial intelligence simulation platform [...]. It is designed to simulate the behavior of collections of expert systems. A related system is CLIPS, though originally a single-agent expert system, a multi-agent extension has been developed [...].

    Specific Problem Domains
    [...]

    Learning
    There are extensive interconnections between the fields of neural networks and artificial intelligence. [...]

    Virtual Reality
    Virtual Reality (VR) is a burgeoning field of computer science with widespread practical applications and tight connections with artificial life. Both Virtual Reality and artificial life practitioners seek to use the computer to represent life-like processes operating in artificial, but life-like worlds. There are marked differences in style between the two fields: the user of a VR simulator is often a participant in the activities of the artificial world, while this is seldom the case in an Alife simulator.

    Autonomous Agent Psychology/Robotics
    [...]

    Traffic Control
    A typical application for distributed artificial intelligence is in the control of traffic. Some traffic control may concern physical vehicles [...], or simply the flow of information packets in a network [...].
    Air traffic control [...].

    Intelligent Manufacturing
    Yet another domain in which monolithic, centralized control structures are giving way to distributed systems of agents. In this approach, which sometimes goes under the name of "Holonic Manufacturing", each machine or process is endowed with an autonomous-agent controller. The agent monitors the state of its machine, tries to satisfy its "needs" in terms of raw material etc., possibly competing with other agents for resources. [...]

    Education
    Artificial intelligence has been [used] extensively as the basis for computer-aided instruction. A number of Alife simulators have been developed to teach principles of biology, especially to children. [...]

    Computer Viruses
    A computer virus may be viewed as a kind of autonomous agent. It is a computer program which attempts to satisfy an agenda without continuous human intervention in its operation. In practice, viruses are distinguished from autonomous agents in that they are generally rather simple in construction, and generally have but one major aim: to reproduce and spread copies of themselves to many computers. All computer programs depend on other programs (such as the operating system) to execute. However, viruses are distinguished by the fact that they often integrate their code directly into that of other programs, such that execution of the host program causes execution of the viral program.
    [...]

    The Fundamental Algorithms of Artificial Life
    The motor propelling most artificial life simulations is an algorithm which allows artificial creatures to evolve and/or adapt to their environment. Each of these algorithms is a major topic in itself, with wide-spread scientific and industrial applications. [...]
    The fundamental algorithms fall into two dominant categories [(list points added)]:

  • learning algorithms, typified by neural networks and
  • evolutionary algorithms, typified by genetic algorithms[, genetic programming, etc.].

  • Neural Networks
  • Evolutionary Algorithms
  • Cellular Automata"

    Comment
    We do not think that the explanation regarding the fields of Artificial Intelligence (AI) and autonomous agent is correct, because AI and Computational Intelligence (CI) are different fields, specifically when one looks at deliberative architectures in the field of Intelligent Agent System (IAS or IntAS) (see the document "Intelligent agents: Theory and Practice" quoted next).

    Howsoever, the connection between the field of AL and our Evoos is obvious.
    If we substitute the field of Cellular Automata (CA) with the field of Fuzzy Logic (FL) in relation to the dominant categories of Artificial Life (AL), then we get the field of Computational Intelligence (CI).
    If we also add probabilistic, then we get the field of Soft Computing (SC).
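
    To make the quoted taxonomy concrete, the following is a minimal sketch of the second dominant category, an evolutionary algorithm in the form of a plain genetic algorithm; the bit-counting fitness function and all parameters are illustrative placeholders only:

      # Minimal genetic algorithm sketch: evolve bit strings toward all ones.
      # Fitness function, population size, and rates are placeholders.
      import random

      GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

      def fitness(genome):       # number of 1-bits; stands in for any objective
          return sum(genome)

      def mutate(genome):        # flip each bit with a small probability
          return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

      def crossover(a, b):       # single-point crossover of two parents
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                    for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          population.sort(key=fitness, reverse=True)
          parents = population[:POP_SIZE // 2]          # truncation selection
          children = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE - len(parents))]
          population = parents + children
      print(max(map(fitness, population)))              # approaches GENOME_LEN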

    We quote a review, which is about the field of Intelligent Agent System (IAS) and was published in October 1994: "Intelligent agents: Theory and Practice
    [...]

    What is an Agent?
    [...]

    A Weak Notion of Agency
    Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties:

  • autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state [Guarantees for autonomy in cognitive agent architecture. 1995];
  • social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language [Software agents. 1994];
  • reactivity: agents perceive their environment, (which may be the physical world, a user via a graphical user interface, a collection of other agents, the INTERNET, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it;
  • pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative.

    A simple way of conceptualising an agent is thus as a kind of UNIX-like software process, that exhibits the properties listed above. [...] in mainstream computer science, the notion of an agent as a self-contained, concurrently executing software process, that encapsulates some state and is able to communicate with other agents via message passing, is seen as a natural development of the object-based concurrent programming paradigm [ACTORS: A Model of Concurrent Computation in Distributed Systems. 1986; An object-oriented language for distributed artificial intelligence. 1993].
    [...]
    A softbot (software robot) is a kind of agent:
    'A softbot is an agent that interacts with a software environment by issuing commands and interpreting the environment's feedback. A softbot's effectors are commands (e.g., UNIX shell commands such as mv or compress) meant to change the external environment's state. A softbot's sensors are commands (e.g., pwd or ls in UNIX) meant to provide ... information.' [Building softbots for UNIX. 1994]

    A Stronger Notion of Agency
    For some researchers - particularly those working in AI - the term 'agent' has a stronger and more specific meaning than that sketched out above. These researchers generally mean an agent to be a computer system that, in addition to having the properties identified above, is either conceptualised or implemented using concepts that are more usually applied to humans. For example, it is quite common in AI to characterise an agent using mentalistic notions, such as knowledge, belief, [desire, goal or plan (rule or action),] intention, [event,] and obligation [Agent-Oriented Programming. 1993]. Some AI researchers [of the Oz project] have gone further, and considered emotional agents [An Architecture for Action, Emotion, and Social Behaviour. July 1992] [The Role of Emotion in Believable Agents. April 1994]. [...] Another way of giving agents human-like attributes is to represent them visually, perhaps by using a cartoon-like graphical icon or an animated face [Agents that Reduce Work and Information Overload. 1994].

    Other Attributes of Agency
    Various other attributes are sometimes discussed in the context of agency. For example:

  • mobility is the ability of an agent to move around an electronic network [Telescript technology: The foundation for the electronic marketplace. 1994];
  • veracity is the assumption that an agent will not knowingly communicate false information [A Theoretical Framework for Computer Models of Cooperative Dialogue, Acknowledging Multi-Agent Conflict. 1988];
  • benevolence is the assumption that agents do not have conflicting goals, and that every agent will therefore always try to do what is asked of it [Deals among rational agents. 1985]; and
  • rationality is (crudely) the assumption that an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved - at least insofar as its beliefs permit [A Theoretical Framework for Computer Models of Cooperative Dialogue, Acknowledging Multi-Agent Conflict. 1988].

    (A discussion of some of these notions is given below; various other attributes of agency are formally defined in [Formalizing properties of agents. 1993].)

    Agent Theories
    [...] Our starting point is the notion of an agent as an entity 'which appears to be the subject of beliefs, desires, etc.' [Agent Theories and Architectures. 1989]. The philosopher Dennett has coined the term intentional system to denote such systems.

    Agents as Intentional Systems
    [...] The intentional notions are thus abstraction tools, which provide us with a convenient and familiar way of describing, explaining, and predicting the behaviour of complex systems.
    Being an intentional system seems to be a necessary condition for agenthood, but is it a sufficient condition? In his Master's thesis, Shardlow trawled through the literature of cognitive science and its component disciplines in an attempt to find a unifying concept that underlies the notion of agenthood. He was forced to the following conclusion:
    'Perhaps there is something more to an agent than its capacity for beliefs and desires, but whatever that thing is, it admits no unified account within cognitive science'. [Action and agency in cognitive science. 1990]
    So, an agent is a system that is most conveniently described by the intentional stance; one whose simplest consistent description requires the intentional stance. Before proceeding, it is worth considering exactly which attitudes are appropriate for representing agents. For the purposes of this survey, the two most important categories are information attitudes and proattitudes:
    [Graphical representation:] information attitudes: belief, knowledge; pro-attitudes: desire[, goal or plan (rule),] intention, obligation, commitment, choice, ...
    Thus information attitudes are related to the information that an agent has about the world it occupies, whereas pro-attitudes are those that in some way guide the agent's actions. [...] it seems reasonable to suggest that an agent must be represented in terms of at least one information attitude, and at least one pro-attitude. Note that pro- and information attitudes are closely linked, as a rational agent will make choices and form intentions, etc., on the basis of the information it has about the world. [...]

    [...]

    Possible Worlds Semantics
    The possible worlds model for logics of knowledge and belief was originally proposed by Hintikka [Knowledge and Belief. 1962], and is now most commonly formulated in a normal modal logic using the techniques developed by Kripke [Semantical analysis of modal logic. 1963] [(Footnote:] In Hintikka's original work, he used a technique based on 'model sets', which is equivalent to Kripke's formalism, though less elegant. [...][)] [...] epistemic alternatives [...]
    On a first reading, this seems a peculiarly roundabout way of characterizing belief, but it has two advantages. First, it remains neutral on the subject of the cognitive structure of agents. It certainly doesn't posit any internalized collection of possible worlds. It is just a convenient way of characterizing belief. Second, the mathematical theory associated with the formalization of possible worlds is extremely appealing [...].
    [...] Epistemic logics are usually formulated as normal modal logics using the semantics developed by Kripke [Semantical analysis of modal logic. 1963]. [...]

    [...]

    Theories of Agency
    [...]

    [...] belief, desire, intention [(BDI)] architectures
    [...]

    [...]

    Further Reading
    [...] A variant on the possible worlds framework, called the recursive modelling method, is described in [Elements of a Utilitarian Theory of Knowledge and Action. 1993]; a deep theory of belief may be found in [A New Formal Model of Belief. 1994].

    Agent Architectures
    [...]

    Classical Approaches: Deliberative Architectures
    [...]

    Planning agents
    [...]

    [...] - [Intelligent Resource-bounded Machine Architecture (]IRMA[)]
    [...] researchers have considered frameworks for agent theory based on beliefs, desires, and intentions [Modeling rational agents within a BDI-architecture. 1991]. Some researchers have also developed agent architectures based on these attitudes. One example is the Intelligent Resource-bounded Machine Architecture (IRMA) [Plans and resource-bounded practical reasoning. 1988].

    [...]

    [...] - GRATE*
    [...]

    Alternative Approaches: Reactive Architectures
    [...]

    Brooks - behaviour languages [or Behavior-Based Architectures and Subsumption Architectures]
    [...] subsumption architecture [...]

    Hybrid Architectures [or Mixed Reactive-Deliberative Architectures]
    [...] hybrid systems, which attempt to marry classical and alternative approaches.
    [...]

    [...] - [Procedural Reasoning System (]PRS[)]
    [...] Like IRMA, [...] the PRS is a belief-desire-intention architecture, [...] Beliefs are facts, either about the external world or the system's internal state. These facts are expressed in classical first-order logic. Desires are represented as system behaviours (rather than as static representations of goal states). A PRS plan library contains a set of partially-elaborated plans, called knowledge areas (KAs), [...]. KAs may be activated in a goal-driven or data-driven fashion; KAs may also be reactive, allowing the PRS to respond rapidly to changes in its environment. The set of currently active KAs in a system represent its intentions. These various data structures are manipulated by a system interpreter, which is responsible for updating beliefs, invoking KAs, and executing actions.

    [...] - TouringMachines
    [...] The architecture consists of perception and action subsystems, which interface directly with the agent's environment, and three control layers, embedded in a control framework, which mediates between the layers. Each layer is an independent, activity-producing, concurrently executing process.
    The reactive layer generates potential courses of action in response to events that happen too quickly for other layers to deal with. It is implemented as a set of situation-action rules, in the style of Brooks' subsumption architecture [...].
    The planning layer constructs plans and selects actions to execute in order to achieve the agent's goals. This layer consists of two components: a planner, and a focus of attention mechanism. [...] planner [...] focus of attention [...].
    The modelling layer contains symbolic representations of the cognitive state of other entities in the agent's environment. These models are manipulated in order to identify and resolve goal conflicts - situations where an agent can no longer achieve its goals, as a result of unexpected interference.
    The three layers are able to communicate with each other (via message passing), and are embedded in a control framework. The purpose of this framework is to mediate between the layers, and in particular, to deal with conflicting action proposals from the different layers. The control framework does this by using control rules.

    [...]

    [...] - InteRRaP
    InteRRaP, like [...] TouringMachines, is a layered architecture, with each successive layer representing a higher level of abstraction than the one below it [... Modelling reactive behaviour in vertically layered agent architectures. 1995 ...]. In InteRRaP, these layers are further subdivided into two vertical layers: one containing layers of knowledge bases, the other containing various control components, that interact with the knowledge bases at their level. At the lowest level is the world interface control component, and the corresponding world model knowledge base. The world interface component, as its name suggests, manages the interface between the agent and its environment, and thus deals with acting, communicating, and perception.
    Above the world interface component is the behaviour-based component. The purpose of this component is to implement and control the basic reactive capability of the agent. This component manipulates a set of patterns of behaviour (PoB). [...]
    Above the behaviour-based component in INTERRAP is the plan-based component. This component contains a planner that is able to generate single-agent plans in response to requests from the behaviour-based component. The knowledge-base at this layer contains a set of plans, including a plan library. The highest layer in INTERRAP is the cooperation component. This component is able to generate joint plans, that satisfy the goals of a number of agents, by elaborating plans selected from a plan library. These plans are generated in response to requests from the plan-based component.
    Control in INTERRAP is both data- and goal-driven. Perceptual input is managed by the world-interface, and typically results in a change to the world model. As a result of changes to the world model, various patterns of behaviour may be activated, dropped, or executed. As a result of PoB execution, the plan-based component and cooperation component may be asked to generate plans and joint plans respectively, in order to achieve the goals of the agent. This ultimately results in primitive actions and messages being generated by the world interface.

    Discussion
    [...] Some researchers have suggested that techniques from the domain of genetic algorithms or machine learning might be used to get around these development problems, though this work is at a very early stage.
    [...]
    [...] Humans seem to manage different levels of abstract behaviour with comparative ease; it is not clear that current hybrid architectures can do so.
    [...] Most work in AI assumes that an agent has a single, well-defined goal that it must achieve. But if agents are ever to be really autonomous, and act pro-actively, then they must be able to generate their own goals when either the situation demands, or the opportunity arises.
    [...]

    Agent Languages
    [...]

    Concurrent Object Languages
    Concurrent object languages are in many respects the ancestors of agent languages. The notion of a self-contained concurrently executing object, with some internal state that is not directly accessible to the outside world, responding to messages from other such objects, is very close to the concept of an agent as we have defined it. The earliest concurrent object framework was Hewitt's Actor model [Viewing control structures as patterns of passing messages. 1977; ACTORS: A Model of Concurrent Computation in Distributed Systems. 1986]; another well-known example is the ABCL system [ABCL: An Object-Oriented Concurrent System. 1990].

    [...] - agent-oriented programming [(AOP)]
    [...] first attempt at an AOP language was the AGENT0 system. The logical component of this system is a quantified multi-modal logic, allowing direct reference to time. [...] The logic contains three modalities: belief, commitment and [cap]ability. [...]
    [...]
    Corresponding to the logic is the AGENT0 programming language. In this language, an agent is specified in terms of a set of capabilities (things the agent can do), a set of initial beliefs and commitments, and a set of commitment rules. [...]

    [...]

    [...] - Concurrent MetateM
    [...] A Concurrent MetateM system contains a number of concurrently executing agents, each of which is able to communicate with its peers via asynchronous broadcast message passing. [...] The logical semantics of Concurrent MetateM are closely related to the semantics of temporal logic itself. This means that, amongst other things, the specification and verification of Concurrent MetateM systems is a realistic proposition [Specifying and verifying distributed intelligent systems. 1993]. [...]

    The IMAGINE Project - APRIL and MAIL
    [...] APRIL was designed to provide the core features required to realise most agent architectures and systems. Thus APRIL provides facilities for multi-tasking (via processes, which are treated as first-class objects, and a UNIX-like fork facility), communication (with powerful message passing facilities supporting network-transparent agent-to-agent links); and pattern matching and symbolic processing capabilities. [...] the MAIL language provides a rich collection of pre-defined abstractions, including plans and multi-agent plans. APRIL was originally envisaged as the implementation language for MAIL. [...]

    [...] - TELESCRIPT
    TELESCRIPT is a language-based environment for constructing agent societies [...].
    [...] There are two key concepts in TELESCRIPT technology: places and agents. Places are virtual locations that are occupied by agents [and include access devices, including mobile devices]. Agents are the providers and consumers of goods in the electronic marketplace applications that TELESCRIPT was developed to support. Agents are software processes, and are mobile: they are able to move from one place to another, in which case their program and state are encoded and transmitted across a network to another place, where execution recommences. Agents are able to communicate with one-another: [...]
    [...]

    [...]

    Applications
    [...]

    Cooperative Problem Solving and Distributed AI
    [...]

    Interface Agents
    [...]
    '[C]omputer programs that employ artificial intelligence techniques in order to provide assistance to a user dealing with a particular application. ... The metaphor is that of a personal assistant who is collaborating with the user in the same work environment.' [Social interface agents: Acquiring competence by learning from users and other agents. 1994]
    [...] A NewT agent is trained by giving it a series of examples, illustrating articles that the user would and would not choose to read. The agent then begins to make suggestions to the user, and is given feedback on its suggestions. NewT agents are not intended to remove human choice, but to represent an extension of the human's wishes: the aim is for the agent to be able to bring to the attention of the user articles of the type that the user has shown a consistent interest in. Similar ideas have been proposed [... with] prescient agents - intelligent administrative assistants, that predict our actions, and carry out routine or repetitive administrative procedures on our behalf [Prescient agents. 1992].
    There is much related work being done by the computer supported cooperative work (CSCW) community. CSCW is informally defined by Baecker to be 'computer assisted coordinated activity such as problem solving and communication carried out by a group of collaborating individuals' [Readings in Groupware and Computer-Supported Cooperative Work. 1993]. The primary emphasis of CSCW is on the development of (hardware and) software tools to support collaborative human work - the term groupware has been coined to describe such tools. Various authors have proposed the use of agent technology in groupware. For example, in [... the] participant systems [...] humans collaborate with not only other humans, but also with artificial agents [Participant systems. 1987].

    Information Agents and Cooperative Information Systems
    An information agent is an agent that has access to at least one, and potentially many information sources, and is able to collate and manipulate information obtained from these sources in order to answer queries posed by users and other information agents (the network of interoperating information sources are often referred to as intelligent and cooperative information systems [An organizational framework for cooperating intelligent information systems. 1992]). [...]

    Believable Agents
    There is obvious potential for marrying agent technology with that of the cinema, computer games, and virtual reality. The Oz project was initiated to develop:
    '... artistically interesting, highly interactive, simulated worlds ... to give users the experience of living in (not merely watching) dramatically rich worlds that include moderately competent, emotional agents.' [Integrating reactivity, goals, and emotion in a broad agent. 1992]
    In order to construct such simulated worlds, one must first develop believable agents: agents that 'provide the illusion of life, thus permitting the audience's suspension of disbelief' [The Role of Emotion in Believable Agents. April 1994]. A key component of such agents is emotion: agents should not be represented in a computer game or animated film as the flat, featureless characters that appear in current computer games. They need to show emotions; to act and react in a way that resonates in tune with our empathy and understanding of human behaviour. The Oz group have investigated various architectures for emotion [An Architecture for Action, Emotion, and Social Behavior. July 1992], and have developed at least one prototype implementation of their ideas [The Role of Emotion in Believable Agents. April 1994].
    [...]"

    Comment
    We have only quoted the more important points to keep the overall quote shorter.

    A software robot (softbot) and an immobile robot (immobot) are different (see the document titled "Immobile Robots [] AI in the New Millennium" and quoted in the Clarification of the 18th of July 2021 and below once again).

    The deliberative agent architecture BDI and the reactive agent architecture are also discussed in the Clarification of the 18th of July 2021.
    As we said elsewhere, we do not consider the BDI agent architecture as a cognitive agent architecture, and the development in this field shows that BDI architectures were merely reinterpreted as cognitive architectures, including for example the PRS and the field of Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot). But in the end, these agent and robot architectures and systems only simulate cognitive properties, such as emotional and social capabilities and skills, and therefore such a designation is wrong and only a marketing trick to confuse the public.
    This also holds for an ABS that uses abductive logic, such as CoMMA-COGs quoted below, if the logic is not used for any reflective action, but only for the application layer or planning by reasoning.
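
    To make this architectural point concrete, here is a minimal sketch of a PRS-style belief-desire-intention interpreter loop as described in the quote, with beliefs as facts, a plan library of Knowledge Areas (KAs), and the currently active KAs as the intentions; all names and the trigger and action details are illustrative assumptions, not the original PRS:

      # Sketch of a PRS-style BDI interpreter loop (illustrative, not the
      # real PRS): beliefs are facts, the plan library holds Knowledge Areas
      # (KAs), and the set of currently active KAs represents the intentions.
      from dataclasses import dataclass, field
      from typing import Callable

      @dataclass
      class KnowledgeArea:
          trigger: Callable[[set], bool]   # goal- or data-driven activation test
          body: Callable[[set], set]       # partially elaborated plan step(s)

      @dataclass
      class BDIAgent:
          beliefs: set
          plan_library: list
          intentions: list = field(default_factory=list)

          def perceive(self, percepts):    # belief revision from new percepts
              self.beliefs |= percepts

          def step(self):
              # activate the KAs whose triggers match the current beliefs
              self.intentions = [ka for ka in self.plan_library
                                 if ka.trigger(self.beliefs)]
              for ka in self.intentions:   # execute active KAs, update beliefs
                  self.beliefs = ka.body(self.beliefs)

      # Usage: a KA that reacts to a "low_battery" belief by recharging.
      agent = BDIAgent(beliefs={"idle"}, plan_library=[
          KnowledgeArea(lambda b: "low_battery" in b,
                        lambda b: (b - {"low_battery"}) | {"charging"})])
      agent.perceive({"low_battery"})
      agent.step()
      print(agent.beliefs)                 # {'idle', 'charging'} (order varies)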

    Believable agents are also related to Distributed Artificial Intelligence (DAI), Multi-Agent and Cooperative Computing (MACC), or Modeling Autonomous Agents in a Multi-Agent World (MAAMAW) since at least 1994, as is the case with AgentSpeak (L) since 1996.
    In sum, we have Intelligent Agent System (IAS) or Intelligent Agent-Based System (IABS), Mobile Agent System (MAS or MobAS), Believable Agent System (BAS), and Simulated Reality (SR or SimR), Multi-Agent System (MAS), and through MBAS or Immobot, as well as Cybernetics and Cyber-Physical System (CPS) with Evoos.
    Also note that a Believable Agent-Based System (BABS) is not a Trustworthy ABS.

    Mobility of an agent is also called migration and teleportation (see the quoted document "Teleporting - Making Applications Mobile" below).
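
    A minimal sketch of what such a migration or teleportation amounts to in practice: the agent's program state is encoded, transmitted to another place, and execution recommences there, just as the TELESCRIPT description quoted above puts it. The in-memory hand-over below stands in for a real network link:

      # Sketch of agent mobility (migration/teleportation): encode the
      # agent's state, ship it to another place, and resume execution there.
      import pickle

      class MobileAgent:
          def __init__(self):
              self.visited, self.counter = [], 0

          def run(self, place):            # one unit of work at the current place
              self.visited.append(place)
              self.counter += 1

          def migrate(self):               # encode program state for transmission
              return pickle.dumps(self)

      def receive(wire_bytes):             # destination place: decode and resume
          return pickle.loads(wire_bytes)

      agent = MobileAgent()
      agent.run("place-A")
      packet = agent.migrate()             # would be sent over the network here
      agent2 = receive(packet)             # execution recommences elsewhere
      agent2.run("place-B")
      print(agent2.visited)                # ['place-A', 'place-B']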

    But we note that a complete, consistent, or universal logical model for all subsystems (e.g. operating system, agent-based system, virtual environment system, application, etc.) and the overall system is missing.
    So we began, in relation to Evoos, to lay the foundation with an

  • environment based on physical, virtual, cybernetical, and fusioned realities,
  • agent system,
  • operating system,
  • fault-tolerant operating system,
  • capability-based operating system,
  • microkernel,
  • validated and verified microkernel,
  • logical model,
  • mathematical model with numbers, including a zero and an empty set,
  • graph-based model with entities and relationships, including a blank,
  • ontological model with signs and an ontological zero,
  • semiotics,
  • Algorithmic Information Theory (AIT),
  • fractal,
  • and so on

    to have a complete trustworthy foundation in the whole observable universe and even slightly beyond through probabilistic and Bayesian reasoning, abductive logic, simulation of all possibilities, observation of realities, and so on.

    Also note that the hybrid agent architecture InteRRaP "was developed to meet the requirements of modeling dynamic agent societies such as interacting robots" respectively real-time real-world MAS, the mobile robot Khepera I was already utilized for robot swarms in the field of Swarm Intelligence (SI) in 1999, and InteRRaP is utilized for the control of robot swarms based on Khepera III, which was introduced in 2006.
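
    The control principle of such vertically layered hybrid architectures can be sketched as follows; this is a toy illustration in the spirit of InteRRaP and TouringMachines, not the original systems, with a reactive layer of situation-action rules that answers urgent percepts and a placeholder planning layer for everything else:

      # Toy sketch of a vertically layered hybrid agent architecture: the
      # reactive layer handles events that happen too quickly for
      # deliberation, otherwise control escalates to the planning layer.
      REACTIVE_RULES = {"obstacle": "swerve", "collision_warning": "brake"}

      def reactive_layer(percept):
          return REACTIVE_RULES.get(percept)       # situation-action rules

      def planning_layer(percept, goal):           # placeholder planner
          return f"plan step toward {goal} given {percept}"

      def control_framework(percept, goal):
          # mediate between the layers: prefer reactive answers for urgency
          action = reactive_layer(percept)
          return action if action else planning_layer(percept, goal)

      print(control_framework("obstacle", "dock"))    # swerve
      print(control_framework("clear road", "dock"))  # plan step toward dock ...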
    We also have the Sony Artificial Intelligence roBOt (AIBO), a Mobile Robotic System (MRS), robotic dog, and companion, pal, or partner.

    We quote a document, which is about the concept of teleporting, migrating, or mobile applications, and was published in 1994: "Teleporting - Making [X Window System] Applications Mobile
    Abstract
    The rapid emergence of mobile computers as a popular, and increasingly powerful, computing tool is presenting new challenges. This subject is already being widely addressed within the computing literature. A complementary and relatively unexplored notion of mobility is one in which application interfaces, rather than the computer on which the applications run, are able to move.
    The Teleporting System developed at the Olivetti Research Laboratory (ORL) is a tool for experiencing such 'mobile applications'. It operates within the X Window System[...], and allows users to interact with their existing X applications at any X display within a building. The process of controlling the interface to the teleporting system is very simple. This simplicity comes from the use of an automatically maintained database of the location of equipment and people within the building. [...]"

    Comment
    Note that the windowing system X Window System is common on Unix-like operating systems.

    See also the very well known language-based environment TELESCRIPT of the 2D Virtual Environment (VE) Magic Cap in the document titled "Intelligent Agents: Theory and Practice" and quoted above.

    Also note that the document titled "The FIPA-OS agent platform: [...] Open Standard" and quoted in the upcoming Clarification of the 13th of April 2022 supports our point of view: "A common (but by no means necessary) attribute of an agent is an ability to migrate seamlessly from one platform to another whilst retaining state information, a mobile agent."
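
    As a toy illustration of the concept, and explicitly not the ORL system itself: an automatically maintained location database tells the system which X display is nearest to a user, and a newly started client can be directed there via the standard DISPLAY mechanism; redirecting an already running client additionally requires a proxy between client and display server, which this sketch omits. The database contents and the application are assumptions:

      # Toy sketch of display teleporting: look up a user's current location
      # in a location database and start an X client on the nearest display.
      import os, subprocess

      LOCATION_DB = {"alice": "room-2.14", "bob": "room-3.01"}       # person -> room
      DISPLAY_DB = {"room-2.14": "ws214:0", "room-3.01": "ws301:0"}  # room -> display

      def teleport(user, command):
          display = DISPLAY_DB[LOCATION_DB[user]]
          env = dict(os.environ, DISPLAY=display)  # X clients honour DISPLAY
          return subprocess.Popen(command, env=env)

      teleport("alice", ["xterm"])  # the xterm appears on the display nearest alice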

    We quote a document, which is about the field of Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot) of the National Aeronautics and Space Administration (NASA) and was published in 1996: "Immobile Robots [] AI in the New Millennium
    [...]
    [...] AI's central goal of developing agent architectures and a theory of machine intelligence [...] software environments, such as a UNIX shell and the World Wide Web, provide softbots with a set of ready-made sensors (for example, LS and GOPHER) and end effectors (for example, FTP and TELNET) that are easy to maintain but still provide a test bed for exploring issues of mobility and real-time constraints. [...]
    [...] the information-gathering capabilities of the Internet, corporate intranets, and smaller networked computational systems supply additional test beds for autonomous agents of a different sort. These test beds, which we call immobile robots (or immobots), have the richness that comes from interacting with physical environments yet promise the ready availability associated with the networked software environment of softbots. [...] Conversion of these and other realtime systems to immobile robots will be a driving force for profound social, environmental, and economic change.
    [...] the focus of attention of immobile robots is directed inward, toward maintaining their internal structure, in contrast to the focus of traditional robots, which is toward exploring and manipulating their external environment. This inward direction focuses the immobot on the control of its complex internal functions, such as sensor monitoring and goal tracking; parameter estimation and learning; failure detection and isolation; fault diagnosis and avoidance; and recovery, or moving to a safe state. Metaphorically speaking, the main functions of an immobot correspond to the human nervous, regulatory, and immune systems rather than the navigation and perceptual systems being mimicked in mobile robots.
    [...] these immobots give rise to a new family of autonomous agent architectures, called model-based autonomous systems. Three properties of such systems are central: First, to achieve high performance, immobots will need to ["develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions" and] exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves to survive decades of autonomous operations. Hence, self-modeling and self-configuration make up an essential executive function of an immobot architecture. Second, to achieve these large-scale modeling and configuration functions, an immobot architecture will require a tight coupling between the higher-level coordination function provided by symbolic reasoning and the lower-level autonomic processes of adaptive estimation and control. Third, to be economically viable, immobots will have to be programmable purely from high-level compositional models, supporting a "plug and play" approach to software and hardware development.
    [...] Our work on these systems fuses research from such diverse areas of AI as model-based reasoning, qualitative reasoning, planning and scheduling, execution, propositional satisfiability, concurrent reactive languages, Markov processes, model-based learning, and adaptive systems. [...] Moriarty and Livingstone are grounded in two immobot test beds. Moriarty was part of the Responsive Environment [...], an intelligent building control system developed within the Ubiquitous Computing Project [...]. Livingstone is part of the Remote Agent, a goal-directed, fully autonomous control architecture, which will fly the [...] space probe [...]. [...]

    Model-Based Configuration Management
    Livingstone is a reactive configuration manager that uses a compositional, component-based model of the spacecraft to determine configuration actions [...]. Each component is modeled as a transition system that specifies the behaviors of operating and failure modes of the component, nominal and failure transitions between modes, and the costs and likelihoods [(probabilities)] of transitions [...]. Mode behaviors are specified using formulas in propositional logic, but transitions between modes are specified using formulas in a restricted temporal, propositional logic. [...] The spacecraft transition-system model is a composition of its component transition systems in which the set of configurations of the spacecraft is the cross-product of the sets of component modes. We assume that the component transition systems operate synchronously; that is, for each spacecraft transition, every component performs a transition.
    A model-based configuration manager uses its transition-system model to both identify the current configuration of the spacecraft, called mode identification (MI), and move the spacecraft into a new configuration that achieves the desired configuration goals, called mode reconfiguration (MR). [...]
    In practice, MI and MR need not generate all transitions and control commands, respectively. Rather, just the most likely transitions and an optimal control command are required. We efficiently generate these by recasting MI and MR as combinatorial optimization problems. [...] We efficiently solve these combinatorial optimization problems using a conflict-directed best-first search algorithm. [...]"
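
    For illustration, a minimal sketch in Python of the quoted mode identification (MI) step as a search over mode assignments ordered by descending prior probability; the toy valve and sensor model and all names are our own assumptions, and the real Livingstone avoids the full enumeration used here by means of its conflict-directed best-first search:

    import heapq
    from itertools import count, product

    # Each component is a transition system reduced to its modes:
    # mode -> (prior probability, set of entailed propositions).
    components = {
        "valve":  {"open":   (0.90, {"flow"}),
                   "closed": (0.09, {"no_flow"}),
                   "stuck":  (0.01, set())},
        "sensor": {"ok":     (0.95, set()),
                   "broken": (0.05, {"unreliable"})},
    }

    def consistent(assignment, observations):
        # A candidate is consistent if no entailed proposition contradicts
        # an observation ('x' and 'no_x' encode a complementary pair).
        entailed = set()
        for comp, mode in assignment.items():
            entailed |= components[comp][mode][1]
        for obs in observations:
            neg = obs[3:] if obs.startswith("no_") else "no_" + obs
            if neg in entailed:
                return False
        return True

    def mode_identification(observations):
        # Return the most likely mode assignment consistent with the
        # observations, popping candidates best-first from a heap.
        names, heap, tie = list(components), [], count()
        for modes in product(*(components[n] for n in names)):
            p = 1.0
            for n, m in zip(names, modes):
                p *= components[n][m][0]
            heapq.heappush(heap, (-p, next(tie), dict(zip(names, modes))))
        while heap:
            neg_p, _, assignment = heapq.heappop(heap)
            if consistent(assignment, observations):
                return assignment, -neg_p
        return None, 0.0

    # Observing 'no_flow' makes 'valve closed, sensor ok' the most likely
    # consistent configuration: ({'valve': 'closed', 'sensor': 'ok'}, 0.0855)
    print(mode_identification({"no_flow"}))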

    Comment
    Note that in the case that an operating system is the environment of an autonomous software agent or softbot, its sensors and effectors or actuators are the shell commands (see also the Agent-Based operating system (ABos) referenced in the section Exotic Operating System of the webpage Links to Software).
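
    For illustration, a minimal sketch in Python of such a softbot, with the shell command ls as its sensor and the shell command touch as its effector; the goal and the chosen commands are our own assumptions:

    import subprocess

    def sense(directory="."):
        # Sensor: 'ls' gives the softbot a percept of its environment.
        result = subprocess.run(["ls", directory], capture_output=True, text=True)
        return set(result.stdout.split())

    def act_create(path):
        # Effector: 'touch' changes the environment.
        subprocess.run(["touch", path], check=True)

    def achieve(goal_file):
        # Trivial sense-act loop: act only if the percept shows the goal unmet.
        if goal_file not in sense():
            act_create(goal_file)
        return goal_file in sense()

    print(achieve("report.txt"))  # True once the file exists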

    Like other entities, we also noted at that time that the complexity in the fields of

  • Agent-Based System (ABS), including Intelligent Agent System (IAS), broad, Believable Agent System (BAS), Multi-Agent System (MAS), and Holonic Agent System (HAS), and
  • (information) spaces, environments, worlds, and universes, including Intelligent Environment (IE), Intelligent Physical Environment (IPE), Intelligent Virtual Environment (IVE), and Intelligent Cyber-Physical Environment (ICBE)

    increases exponentially with every additional capability. Our Evoos includes the solution, which is an IAS for

  • managing, controlling,
  • executing or operating,
  • computing or processing,
  • monitoring,
  • learning, and
  • organizing

    a physical or virtual system or a hybrid of both in general and an IAS in particular.
    This led to a Reflective operating system (Ros or Refos), Multi-Agent System (MAS), Holonic Agent System (HAS), Distributed operating system (Dos), and Robotic operating system (Ros or Robos). Everything simply comes together seamlessly, as also shown in this Clarification.
    This mastery of complexity is also one of the reasons why we said that the Integrating Ontologic System Architecture (OSA) integrates all in one.
    How to implement: a self-organizing, MAS-based, polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS), specifically a quick blackboard system respectively Scalable Distributed Tuplespace (SDT), Scalable Content-Addressable Network (SCAN), Space-Based technologies (SBx), Service-Oriented technologies (SOx), Distributed Ledger Technology (DLT), etc.
    Where to begin: interface between hardware and software, microkernel, operating system, middleware, and so on.
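
    For illustration, a minimal, single-process sketch in Python of the blackboard system respectively tuple space programming model with the classic operations out, rd, and in; distribution, polylogarithmic scalability, and synchronization across nodes are exactly what this toy of our own does not show:

    import threading

    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):
            # Publish a tuple to the space.
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def _match(self, pattern):
            # None acts as a wildcard field in the pattern.
            for t in self._tuples:
                if len(t) == len(pattern) and all(
                        p is None or p == f for p, f in zip(pattern, t)):
                    return t
            return None

        def rd(self, pattern):
            # Blocking read: return a matching tuple, leave it in the space.
            with self._cond:
                while (t := self._match(pattern)) is None:
                    self._cond.wait()
                return t

        def in_(self, pattern):
            # Blocking take: return a matching tuple and remove it.
            with self._cond:
                while (t := self._match(pattern)) is None:
                    self._cond.wait()
                self._tuples.remove(t)
                return t

    space = TupleSpace()
    space.out(("task", 42, "classify"))
    print(space.in_(("task", None, None)))  # -> ('task', 42, 'classify')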

    See also for example the Clarification of the 18th of July 2021 for more details related to MBAS or ImRS, CAS and CRS, UbiC, IoT, Cyber-Physical System, and Evoos and OS.

    We quote a first document, which is about the fields of MAS and self-organization, and was published in July 2002: "Self-Organization in Multiagent Systems: From Agent Interaction to Agent Organization
    Abstract. In this paper we suggest a new sociological concept to the study of (self-) organization in multiagent systems. First, we discuss concepts of (self-) organization typically used in DAI. From a sociological point of view all these concepts are missing the special quality of organizations as self-organizing social entities. Therefore we present a concept of organization based on the habitus-field theory of Pierre Bourdieu. With reference to this theory, organizations are viewed as both "autonomous social fields" and "corporate agents" which are competing with other organizations in the same domain. Finally, we describe the Framework for Self-Organization and Robustness in Multiagent systems (FORM) corresponding to these sociological characteristics of organizations. This framework uses delegation as the central concept to define organizational forms and relationships in task assignment multiagent systems."

    Comment

    Our Evoos is

  • self-organizing,
  • multimodal,
  • an Agent System,
  • reflective, fractal or holonic, and holologic, and
  • a Distributed operating system (Dos),

    which implies MAS.

    See also the quote of "CoMMA-COGs ..." below.

    We quote a second document, which is about the fields of MAS and self-organization, and was published in 2005: "Watching Your Own Back: Self-Managing Multi-Agent System
    Abstract - We describe a category of multi-agent applications that address the problem of managing widely distributed Multi-Agent Systems (MAS) by introspectively managing both themselves and other co-resident MAS applications. The result is an extremely fault-tolerant system that has well-defined survivability parameters without requiring deep coordination between the primary application and the management layer. We argue that such an integrated systems-management solution can be both effective and simpler than an out-of-band solution would allow, and can leverage all the flexibility of a MAS infrastructure. [...]

    Introduction
    There are very few standard approaches to management of distributed multi-agent systems (MAS). A distributed MAS-based application should be able to be widely distributed, as massively parallel as the application logic allows, and insensitive to temporary infrastructure failures, among other attributes. These goals imply a management, monitoring, and control infrastructure with similarly optimistic goals.
    Application management is usually done external to the application, often in an extremely ad-hoc fashion. Other times it is handled at a very low level (e.g., SNMP), or through a simple replication and voting scheme (e.g., seti@home). [...]
    We propose that an effective approach to this problem is to build a DMAS application specifically for the purpose of managing its own survivability, and then extending its umbrella of influence to cover any other co-located MAS applications.
    In this paper we outline the general principles required of an application that will manage itself, as developed under the UltraLog program [1]. We then argue for the general applicability of this approach to other MAS. Finally, we describe our experience building part of such a management application, which we undertook by extending the Cougaar [2], [7] agent architecture to support an embedded agent restart mechanism.

    General Approach
    The general approach we propose can be summed up as the designing of a MAS application to manage itself, then extending the management application's responsibility to cover one or more non-management applications.
    [...]
    In addition, at decision points (where a single decision must be made and implemented), there must be a mechanism to decide which of (potentially) many management voices will be heard. Such decision points we call Management Agents (MA).
    Although we require that managers be decentralized to avoid inherent single points of failure, it is critical that choices be made coherently. This may be satisfied either by a decision voting mechanism (where a number of decision makers collaborate on a choice before allowing action), or by leader election (where a temporary decision maker is chosen from a group of candidates via a simple group algorithm). [So what is the problem with seti@home in particular and the foundational middleware system Berkeley Open Infrastructure for Network Computing (BOINC) in general?]
    [...]
    Information about the health and environment of the managed application needs to be available to decision makers. [...]
    Required features of an in-band MAS Management Application include:

  • Communication between management agents and agent (control) services. Minimally, MAs need to have some mechanism to control and monitor agents. Cougaar places a full-fledged agent on each agent life support server yielding, in effect, complete in-band access to a proxy for the host computer, and full control of the hosted agents. [...]
  • In-band communication between all management agents and managed agents. Manager and managed agents need to be able to communicate freely under normal circumstances. Deviations from this standard do not preclude management, but limit the level of service available in such instances. [...]
  • Management API. If a non-management agent can influence management choices by requesting a service or hinting at its state, the management layer can make more optimal and less disruptive choices [...].
  • Cooperation with other Management-like applications. While this paper focuses on issues of robust operations, similar arguments may be made for issues of security: the opportunity for cooperation between such management- like applications is highly desirable.
  • Visibility into neighboring management spheres of influence. As is true in organization management, it is often helpful to know how adjoining and overlapping management regions are performing, particularly when your non-management applications depend on or are depended-on by the agents in those regions.

    Any manageable agent must have a set of features:

  • [...]
  • Stored state must be portable. If an agent requires that any saved state information exist in order to recover from a failure, then that state must itself be shareable or portable so that the agent could be recovered on hardware other than the original. [...]
  • A deep measure of health. Host existence measures (ping response) are useful but not sufficient. Better measures focus on liveness (is the agent actually performing its duties) and liveliness (how well is the agent performing).
  • Health visibility. There needs to be some dependable measure of agent health visible from outside the agent. [...]
  • In-band mobility. If the agent infrastructure allows agents to move themselves (or to move others), then the management infrastructure may be able better to optimize and load-balance the agents under its control over the resources available to it.
  • Agent lifecycle control. Agent restart functionality is the basic function that needs to be accessible to the management application. If agents can be killed, stopped, or slowed in addition to being started and moved, then the control infrastructure has more control opportunities available.

    Applicability of Approach
    We have described how to build a MAS that is self-managing. We argue that this solution is quite good, because it leverages the strengths of multi-agent systems in general, and the strengths (and weaknesses) of the specific MAS as well as the management functions we are building themselves.
    We argue that building a MAS that manages itself is both a) effective and b) efficient. In particular, building the MAS to manage itself means implementing the MAS management system both a) as agents, and b) in-band to the MAS application. There are good reasons for doing each. [...]

    Effectiveness of Agent Architecture
    In order to manage a distributed multi-agent system, there are several particular things required. First, the requirement to manage a distributed system drives the design requirements. To manage such a system requires communications and control points to be similarly distributed. Therefore, the management system must be distributed, must have some portion of its function co-located with the various agents, and must share requirements for efficient communications (of both information and control data). Each component of the management system resident with an agent requires sensors aware of its environment [...], intelligence to reason about the implications of that data, and communications to interact with other portions of the management system.
    Satisfying these requirements can be accomplished very effectively with an agent-based architecture. To understand why, consider the requirements. First, the basic functional requirements for distributed processing, reasoning, and communications all match nicely with an agent architecture. Additionally, a management system should be capable of making independent decisions for local agents, particularly in the face of disrupted communications. This clearly suggests an agent architecture.
    [...]
    Implementing our management system using agents [Evoos] has several other benefits. This approach provides resilience if any piece of the management system fails, and makes it easier to manage a distributed application. This approach also permits the management system easily to scale to support larger applications, and it provides the management system the ability to make dynamic changes to the control and communications patterns if necessary.
    A key benefit of building an agent-management system using agents is that the management system understands the semantics of agent interactions. By working on the time and network scale of agents, the management system can effectively monitor agent communications and reason about the semantics of network load on agent function, for example. In other words, a management system built from agents is "application architecture aware" for an agent system. [We already have a name for this original and unique feature of our Evoos: Autonomic Computing (AC).]
    By implementing the management system using agents, we leverage all of the pros (and cons) of building systems using agents (redundancy, distributed processing, mobility, network vulnerabilities, etc). As a result, an agent architecture is an effective means for building a management system for a multi-agent system.

    Efficiency of In-Band Management
    An agent-management system developed using agents can be effective, but building that system as part of the same agent architecture results in efficiency gains in design, implementation, debugging, and ongoing operations. For example, the management sub-system re-uses many of the functions required by the agent application itself. Additionally, by building management as internal to the application, coordination among subsystems is more efficient. [And once again: Bingo!!! Guess why we use a Holonic Agent System (HAS) approach for Evoos.]

    Reuse
    The primary motivation for managing an agent system using an in band sub-system is the efficiency gains of reusing functions. To manage an application requires stable messaging, security, and other functions that are also required by the application itself. By re-using these components, we achieve efficiency of design, implementation, and debugging. Additionally we achieve efficiency of operations, by avoiding the overhead of extra systems. By using fewer subsystems, there are fewer components to develop, debug, secure, and make robust. As a result, the entire system is easier to manage and more survivable and trustworthy[.]
    In addition, the re-use of subsystems makes management of the management system itself easier. For example, our security components get robustness management and vice versa. Note therefore the inductive nature of this approach: by managing our application in-band, we also manage the management system, giving us a survivable survivability system.
    [...] by requiring the MAS infrastructure to support two applications - the MAS itself and the management system - we build increased flexibility and feature support into the infrastructure. This results in a more powerful infrastructure with more robust features, which is often more stable and easier to manage itself. In our experience [2], requiring the management system to use the features provided by the MAS usually does not greatly constrain the management capabilities, and the benefits in implementation and maintenance simplicity are substantial.

    Deep Coordination
    [...] By making the management function part of the application, no separate communication and coordination mechanisms are required. While the management function can thus avoid separate query mechanisms, the timing, detail and semantic content of the information connection can be much deeper using in-band management. This is because in-band managers have access to internal communications, interfaces, and constants of the agent application, giving the managers "first-hand" knowledge of the system state. Thus the management system is more than simply "application aware", but intrinsically understands the application, and at any point can balance the needs of the application with those for robustness, security, or other maintenance.
    [...] Therefore, our [C.S.'] approach to build MAS that are self-managing is a powerful and effective solution to the MAS-management problem. [...]

    The UltraLog Logistics Application
    UltraLog is a Defense Advanced Research Projects Agency (DARPA) sponsored research project focused on creating survivable, large scale, distributed-agent systems capable of operating effectively in chaotic environments. [...] The objective of the UltraLog project is to create a comprehensive capability that will enable a massive scale, trusted, distributed-agent infrastructure for operational logistics to be survivable under the most extreme circumstances.
    UltraLog's primary application is the planning and simulated execution of military-logistics operations. [...] As the plan is generated, we simulate execution of the plan, with associated changes in requirements and deviations between expected and observed behaviors.
    [...]
    In UltraLog, defense agents are added to the agent system to maintain system survivability and robustness. These defenses are implemented as agent applications that both manage the logistics agents and manage one another. [...]
    [...]
    Each enclave contains one or more management agents. Only one management agent within each enclave is active at any given time, controlled by an election algorithm that is described below. [...]
    Management agents are specified in the agent society configuration just like any other Cougaar agent. They leverage the basic capabilities of the Cougaar architecture, such as reliable, robust messaging, naming, servlet-based user interfaces, and blackboard-based agent communication channels. Management agents can be dynamically added, moved, and removed from the system. [...]
    Several forms of "health" sensors are used to monitor agent liveness. [...] Active "heartbeat" messages [...] [And once again: Bingo!!!] These sensors are adaptively tuned to minimize message traffic, balanced against the risk of false non-liveness detection, which would cause undesired agent restarts.
    [...]
    Once an agent is detected as dead, a genetic-algorithm based load balancer determines which host and node should restart the agent. [...] [And once again: Bingo!!!]
    As noted above, multiple restart-management agents are loaded into each enclave, where only one restart manager is active at any time. The active manager is determined by a bully style, leader-election algorithm [6 [Leader Election in Asynchronous Distributed Systems. 1999]], where the active manager must renew a lease to maintain control over the enclave. The non-active managers participate in the elections, and take over the restart management responsibilities if the active management agent is lost. [...]"

    Comment
    Obviously, we already do have such an introspective, fault-tolerant, and self-managing management MAS for MAS with our reflective, fractal or holonic, Evoos and our field of Autonomic Computing (AC). In fact, a Holonic Agent System (HAS) approach is the ideal realization.
    See also the work about "The Mystery of the Tower Revealed: A Nonreflective Description of the Reflective Tower" discussed in the OntoLinux Further steps of the 21st of August 2010, which is related to recursive Virtual Machines (VMs) and fractal/holonic systems.
    Read the comment to the document "Model-Based Autonomous System (MBAS) or Immobile Robotic System" quoted above and the other comments related to MAS and HAS.
    We also have a heartbeat in Evoos with the pulse mentioned in chapter 5 Zusammenfassung==Summary of The Proposal.
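
    For illustration, a minimal sketch in Python of such a heartbeat-based failure detector with an adaptively tuned timeout; the exponential smoothing rule is a common technique and our own assumption, not a detail taken from the quoted document or The Proposal:

    import time

    class HeartbeatMonitor:
        def __init__(self, factor=3.0):
            self.last_beat = {}   # agent -> time of last heartbeat
            self.interval = {}    # agent -> smoothed inter-arrival time
            self.factor = factor  # timeout = factor * smoothed interval

        def beat(self, agent):
            now = time.monotonic()
            if agent in self.last_beat:
                sample = now - self.last_beat[agent]
                # An exponentially weighted average adapts to observed traffic,
                # balancing message load against false non-liveness detection.
                prev = self.interval.get(agent, sample)
                self.interval[agent] = 0.8 * prev + 0.2 * sample
            self.last_beat[agent] = now

        def suspected_dead(self, agent):
            if agent not in self.last_beat:
                return True
            timeout = self.factor * self.interval.get(agent, 1.0)
            return time.monotonic() - self.last_beat[agent] > timeout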

    The multiple management agents and the leader-election algorithm remind us of a Byzantine resilience protocol, though it is not discussed in the quoted document.
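
    For illustration, a minimal sketch in Python of a lease-renewing, bully-style leader election as described in the quote; the shared lease store is abstracted as a local dictionary and is our own simplification of the group algorithm:

    import time

    LEASE_SECONDS = 5.0
    lease = {"holder": None, "expires": 0.0}  # stand-in for shared state

    def try_acquire(candidate_id, now=None):
        # Bully rule: the highest-ranked live candidate wins once the
        # current lease has expired; the holder renews to stay active.
        now = time.monotonic() if now is None else now
        if lease["holder"] == candidate_id or now >= lease["expires"]:
            lease["holder"] = candidate_id
            lease["expires"] = now + LEASE_SECONDS
            return True
        return False

    def elect(candidates, now=None):
        # Non-holders participate by offering themselves in rank order.
        for cid in sorted(candidates, reverse=True):  # highest rank first
            if try_acquire(cid, now):
                return cid
        return lease["holder"]

    print(elect({1, 2, 3}))  # -> 3 becomes the active manager
    print(elect({1, 2}))     # -> 3 still holds an unexpired lease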
    Indeed, the agent architectures Tok and InteRRaP, and the reflective, actor-based (concurrent), (resilient) (survivable) fault-tolerant and (trustworthy) reliable, and distributed operating systems Aperios (Apertos (Muse)) and TUNES OS also show why and how we seamlessly transitioned to replication, time stamping, fault tolerance, and reliability as part of a Resilient Distributed System (RDS).

    In relation to Evoos we also note the replication of (neuronal) cells and agents as a Holonic Agent System (HAS), and through MAS with time stamping and fault tolerance we already are at the Byzantine resilient Secure INtrusion-Tolerant Replication Architecture (SINTRA) and with Artificial Neural Network (ANN) and Virtual Machine (VM) we already are at the Peer-to-Peer (P2P) VM Askemos.
    And with geometrics and spatial relations we already are even at the relativistic theories of Einstein and a Theory of Everything (ToE).

    See also the comments to the documents titled "Intelligent Agents: Theory and Practice" and "SIF-VW: An integrated system architecture for agents and users in virtual worlds" (Social Interaction Framework for Virtual Worlds (SIF-VW)), as well as the other explanations and clarifications, in which we showed how Evoos and the OS solve the problem.

    Somehow, this looks like another blunt and blatant copyright infringement by DARPA and its contractors in relation to the Cognitive Agent Architecture (Cougaar), which is not an architecture for a Cognitive Agent System (CAS) at all, and our Evoos, as one can easily see with this clarification. The details of how that plagiarism works and the other blah blah blah for pretending to be competent are not relevant in this case. Unbelievable that the responsible company belongs to the best in technology and innovation that the U.S.America has.

    Eventually, we got the next piece of evidence, which shows that the evolution of Cougaar is based on our Evoos and OS.

    We quote a document, which is about the field of Intelligent Environment (IE) and was published between the 13th and 14th of December 1999: "Meeting the Computational Needs of Intelligent Environments: The Metaglue [Multi-Agent] System
    Abstract. Intelligent Environments (IEs) have specific computational properties that generally distinguish them from other computational systems. They have large numbers of hardware and software components that need to be interconnected. Their infrastructures tend to be highly distributed, reflecting both the distributed nature of the real world and the IEs' need for large amounts of computational power. They also tend to be highly dynamic and require reconfiguration and resource management on the fly as their components and inhabitants change, and as they adjust their operation to suit the learned preferences of their users. Because IEs generally have multimodal interfaces, they also usually have high degrees of parallelism for resolving multiple, simultaneous events. Finally, debugging IEs present unique challenges to their creators, not only because of their distributed parallelism, but also because of the difficulty of pinning down their "state" in a formal computational sense. This paper describes Metaglue, an extension to the Java programming language for building software agent systems for controlling Intelligent Environments that has been specifically designed to address these needs.
    [...] Although their precise applications, perceptual technologies, and control architectures vary a great deal from project to project, the raisons d'être of these systems are generally quite similar. They are aimed at allowing computational systems to understand people on our own terms, frequently while we are busy with activities that have never before involved computation. IEs seek to connect computational systems to the real world around them and the people who inhabit it.
    This paper presents what we believe are general computational properties and requirements for IEs, based on our experience over the past four years with the Intelligent Room Project [...]. [...] engineering these complex systems.
    [...] Metaglue, a specialized language for building systems of interactive, distributed computations, which are at the heart of so many IEs. Metaglue, an extension to the Java programming language, provides linguistic primitives that address the specific computational requirements of intelligent environments. These include the need to: interconnect and manage large numbers of disparate hardware and software components; control assemblies of interacting software agents en masse; operate in real-time; dynamically add and subtract components to a running system without interrupting its operation; change/upgrade components without taking down the system; control allocation of resources; and provide a means to capture persistent state information.
    Metaglue is necessary because traditional programming languages (such as C, Java, and Lisp) do not provide support for coping with these issues. There are currently several other research systems for creating assemblies of software agents [7,8,9], which provide low-level functionality, e.g., support for mobile agents and directory services. These features are necessary but not sufficient.

    Computational Properties of Intelligent Environments
    Intelligent Environments by and large share a number of computational properties due to commonalties in how they internally function and externally interact with their users. [...]

    Distributed, modular systems need computational glue
    [...]
    [...] there needs to be some way of expressing the "logic" of this interconnection. In other words, inter-component connections are not merely protocols, but must also contain the explicit knowledge of how to use these protocols. [...]
    [...]

    Resource management is essential
    Interactions among system components in an IE can be exceedingly complex. Resources, such as video displays or computational power, can be scarce and need to be shared among different applications. [...] even in an environment with ample resources for a single user, conflicts can unknowingly arise when multiple people attempt to interact with it simultaneously.
    [...]

    Configurations change dynamically
    [...]

    State is precious
    [...]
    Furthermore, IEs acquire state through interactions with users. [...]
    The most critical part of Hal's state comes from information it learns while observing its users. Hal has several machine learning systems for learning about users' preferences and activities. These systems have no straightforward way to unlearn and return to a previous coherent state. Checkpointing in the style of reliable transaction systems can partially ameliorate these problems with respect to the local state of individual components. However, when IEs are asynchronous and distributed, repeating a particular global state can be, practically speaking, impossible to achieve. (One technique we have been investigating is allowing an IE to essentially simulate itself by replaying previously observed and recorded events.)
    Thus, there is a clear need for an IE's software architecture to permit a kind of dynamism rare in conventional computational systems. We would like to stop, modify, and reload components of a running system and have them reintegrate into the overall computation with as much of their state intact as possible. [...]

    Real-time response
    [...] The parts of the system that acknowledge and react to users must be immediately responsive even if other parts of the system, for example, in the midst of processing an information retrieval query, require more time to respond.
    [...] computer vision systems, each producing several hundred dimensional data vectors at a rate of up to 30 a second, all connect to a Metaglue-based visual event classification system which must process all this data in real-time.

    Debugging is difficult
    [...] distributed, asynchronous systems [...] understanding the operation of distributed, loosely coupled components running in parallel - as does the controller for an IE - where different serializations can have different system-wide effects, is best, but rarely successfully, avoided. [...]
    [...]

    Metaglue
    [...]

    The Design
    Metaglue is an extension to the Java programming language that introduces a new Agent class. By extending this class, user-written agents can access the special Metaglue methods discussed below. Metaglue has a post-compiler, which is run over Java-compiled class files to generate new Metaglue agents. Metaglue also includes a runtime platform, called the Metaglue Virtual Machine, on which its agents run. [...]
    [...]

    The Capabilities
    Metaglue offers the following capabilities, each of which we will address in turn:
    1. Configuration management
    2. Establish and maintain the configuration each agent specifies
    3. Establish communication channels between agents
    4. Maintain agent state
    5. Introduce and modify agents in a running system
    6. Manage shared resources
    7. Event broadcasting
    8. Support for debugging
    Metaglue has a powerful naming scheme for agents [...]. We will use here the simplest form of it, the name of the Interface file of an agent, which is in the Java class package format, e.g., an agent for controlling a television might be referenced by device.Television.

    1. Configuration Management
    Metaglue has an internal SQL database for managing information about agent's modifiable parameters (called Attributes), storing their internal persistent state, and giving agents fast, powerful database access.
    [...]

    2. Agent Configurations
    Metaglue agents can specify particular requirements that the system must insure are satisfied before they are willing to run. [...]
    [...]

    3. Agent Connections
    [...]
    [...] Because agents refer to each other by their capabilities and not directly by name, new agents can easily be added to the system that implement preexisting capabilities without modifying any of the agents that will make use of them. [...]
    Metaglue will try to locate an agent that provides the requested capability on any of the system's computers' MVMs and return a reference to it to the caller. Metaglue has an internal directory called a Catalog that it uses to find agents once they are started. Metaglue agents automatically register their capabilities with the Catalog when they are run.
    [...]

    4. Agent State
    [...]

    5. Modifying a running system
    [...]
    Interestingly, the Metaglue system is itself recursively constructed out of a special set of Metaglue agents. These agents have the full functionality of the system available to them [...].

    6. Managing shared resources
    Among the largest and most complex systems in Metaglue is its resource manager.
    [...]
    [...] Metaglue has a hierarchical set of dealer agents that are responsible for distributing resources to the rest of the system. [...]

    7. Event Broadcasting
    In addition to agents making direct requests of one another through method calls, Metaglue agents can pass messages among themselves. Agents can register with other agents, including the Metaglue system agents, to find out about events going on in the system. [...]
    [...]

    8. Debugging
    [...]
    Metaglue also has a logging facility to manage and centralize agents' textual output. [...]
    [...]

    Discussion
    [...]

    Distributed, modular systems need computational glue
    [...] Rather than use a special communication mechanism, such as [Common Object Request Broker Architecture (]CORBA[)] or [Knowledge Query and Manipulation Language (]KQML[)], separate from the system's internal controller, Metaglue allows us to reduce the amount of infrastructure by providing for both communication and control with a much lighter-weight system.

    Resource management is essential
    [...]

    Configurations change dynamically
    [...] Metaglue's ability to start and stop agents while leaving the rest of the system running allows us to dynamically "hotswap" components of a running computation. Finally, by substituting new resource managers into a running system, new functionality can be added that previously no agents were aware of.

    State is precious
    [...] Notions of global state, however, remain illusive concepts.

    IEs model the parallelism of the real world
    [...]

    Real-time response
    [...]

    Debugging is difficult
    [...]

    Future directions
    We are presently incorporating an expert system into Metaglue to allow more sophisticated reasoning about system configuration and resource management. We are also creating a machine learning extension to Metaglue, which will incorporate pieces of the system described in [6 [Learning Spatial Event Models from Multiple-Camera Perspectives in an Intelligent Room. In submission.]]."

    Comment
    At first, we would also like to note that the thesis paper "Metaglue: A Programming Language for Multi-Agent Systems" is dated the 20th of January 1999 and February 1999.
    Furthermore, the quoted document was presented at the 1st International Workshop on Managing Interactions in Smart Environments (MANSE '99), which was held between the 13th and 14th of December 1999, but the first version of The Proposal describing our Evoos was published and discussed on the 10th of December 1999, though the matter was already discussed for many months before, since around February or March 1999.
    Obviously, a lot happened once again also at that time, exactly in the same period of time, and in the fields, in which C.S. and our corporation were active. It is very long ago that we considered such cases to be happenstance.

    But Metaglue

  • is a software multimodal control system of CPS 1.0,
  • lacks a logic specification, model-based reasoning, and so on, and therefore it is not a Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot) of CPS 2.0, and
  • has only a set of Metaglue agents to recursively construct the MAS and therefore is not a Holonic Agent System (HAS), and so on, and
  • lacks reflective, fractal or holonic, and holologic properties, as well as logic specification and model-based approaches, despite the fact that the "Metaglue [multi-agent] system is itself recursively constructed out of a special set of Metaglue agents".

    The introduction of reasoning by an expert system is only mentioned as a future direction in the middle of December 1999, instead of extending the whole Metaglue Multi-Agent System to an Intelligent Agent-Based System (IABS) in the many years before, despite the fact that the NASA immobot is also utilized as part of the Responsive Environment, which is an intelligent building control system and was developed within the Ubiquitous Computing Project.
    Also note that according to an online encyclopedia "[a]n expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and [if-then-]rules. The inference engine applies the rules to the known facts to deduce new facts."
    Eventually, rule-based reasoning is not model-based reasoning, and therefore the Metaglue system is considered neither an Immobot nor a Cognitive Agent System (CAS).
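
    To make the difference concrete, a minimal sketch in Python of the quoted split into inference engine and knowledge base, with forward chaining over if-then rules; the rules about a lamp are our own assumptions, and note that no model of structure and behavior is involved, which is exactly why such rule-based reasoning is not model-based reasoning:

    facts = {"switch_on", "bulb_ok"}
    rules = [  # (antecedents, consequent) if-then rules of the knowledge base
        ({"switch_on", "bulb_ok"}, "light"),
        ({"light"}, "room_lit"),
    ]

    def forward_chain(facts, rules):
        # Inference engine: apply the rules to the known facts until a
        # fixpoint is reached, deducing new facts along the way.
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if antecedents <= facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True
        return facts

    print(forward_chain(set(facts), rules))  # adds 'light' and 'room_lit'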

    Very interesting for us are the

  • Machine Learning (ML) systems for learning about users' preferences and activities, but the thesis paper "Metaglue: A Programming Language for Multi-Agent Systems" only mentions the "occupant's preferences", but not the fields of Machine Learning (ML) or Computational Intelligence (CI) at all, and the book titled "Managing Interactions in Smart Environments" and summarizing the MANSE '99, including this quoted document, mentions "[t]he embedding of computational intelligence into the objects of our daily lives" only in its foreword, but was published on the 15th of May 2000,
  • recursive construction of agents, but the thesis paper "Metaglue: A Programming Language for Multi-Agent Systems" also does not include the term recursive or a set of agents,
  • naming scheme, which reminds us of the naming scheme of the Virtual Object System (VOS), and
  • resource management, which reminds us of the CoMMA-COGs (see the quoted document titled "COGs: Cognitive Architecture for Social Agents" below) and our Evoos, and
  • event broadcasting (see the sketch after this list).
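
    For illustration, a minimal sketch in Python of a capability-based catalog with event broadcasting in the style described in the quote; the class and method names are our own assumptions and not the actual Java interfaces of Metaglue:

    from collections import defaultdict

    class Catalog:
        # Agents register capabilities ('device.Television'); callers look
        # agents up by capability, never by direct reference.
        def __init__(self):
            self.by_capability = {}
            self.subscribers = defaultdict(list)  # event type -> callbacks

        def register(self, capability, agent):
            self.by_capability[capability] = agent

        def locate(self, capability):
            return self.by_capability.get(capability)

        def subscribe(self, event_type, callback):
            self.subscribers[event_type].append(callback)

        def broadcast(self, event_type, payload):
            # Message passing alongside direct method calls.
            for callback in self.subscribers[event_type]:
                callback(payload)

    catalog = Catalog()
    catalog.register("device.Television", object())
    catalog.subscribe("power", lambda p: print("event:", p))
    catalog.broadcast("power", {"device.Television": "on"})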

    In relation to ML and CI, and also the recursive construction of agents, we also ask the question why that happened in the same period of time, when we were conducting research and development, and creating our Evoos, and not in the years before.

    We also see the same deficits of the other technologies in the fields of Ubiquitous Computing (UbiC) and Internet of Things (IoT), and also Artificial Life (AL), Artificial Intelligence (AI), Autonomous System (AS) and Robotic System (RS), including the fields of Intelligent Agent-Based System (IABS or IAS) and MAS.

    Much more importantly, we conceived that all these technologies related to Distributed System (DS), operating system (os), DataBase Management System (DBMS), Agent-Based System (ABS), Intelligent Environment (IE or IntE), including Intelligent Virtual Environment (IVE or IntVE), and so on, will become so complex that a dedicated ABS, MAS, or even Intelligent Agent-Based System (IABS or IAS) is required to manage them (see also the quoted document "Watching Your Own Back: Self-Managing Multi-Agent System" above).

    We also found out rather quickly what the true problems in the field of Distributed System (DS) are, including fields like MAS and solutions like Metaglue, and solved most of them by our improvements, optimizations, and creations as part of our Evoos and OS.
    For example, broadcasting and global state can be realized by our polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS) with one- or two-hop lookup performance in many cases, O(1) (constant time) hop lookup performance in most cases, and up to O(log^k n) (polylogarithmic time) hop lookup performance in cases of hotspot regions with churn-intensive workloads (see the sketch after the following list), which was not recognized until we explained this original and unique property of our OS in all details in the

  • Ontologic Net Further steps of the 18th of February 2019,
  • Ontologic Net Further steps of the 20th of February 2019,
  • Ontologic Net Further steps of the 23rd of February 2019,
  • Clarification of the 23rd of February 2019,
  • OntoLix and OntoLinux Website update of the 10th of March 2019, and
  • OntoLix and OntoLinux Website update of the 12th of March 2019

    (keywords lookup and tuple space or tuplespace).
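
    For illustration, a minimal sketch in Python of the hop-count claim: greedy routing over Chord-style finger tables needs at most O(log n) hops on a toy ring, while keeping full routing state per node yields the O(1) one-hop case; the ring below is our own toy and not the actual overlay of our OS:

    def fingers(node, n):
        # Finger table of a node on a ring of size n (a power of two):
        # the nodes at distances 1, 2, 4, ..., n/2, i.e. node + 2^i (mod n).
        return [(node + (1 << i)) % n for i in range(n.bit_length() - 1)]

    def lookup(start, key, n):
        # Greedy routing: at each hop, jump to the finger closest to the
        # key without overshooting; returns the number of hops taken.
        node, hops = start, 0
        while node != key:
            step = max(f for f in fingers(node, n)
                       if (key - node) % n >= (f - node) % n)
            node, hops = step, hops + 1
        return hops

    n = 1 << 10                   # a toy ring of 1024 nodes
    print(lookup(0, 777, n))      # -> at most log2(1024) = 10 hops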

    We quote a first document, which is about the fields of Multi-Agent System (MAS), Believable Agent System (BAS), and Intelligent Virtual Environment (IVE or IntVE), and was published in May 1992: "Integrating Reactivity, Goals, and Emotion in a Broad Agent
    [...]

    Broad Agents
    The Oz project [3 [Virtual reality, art, and entertainment. 1992]] [...] is developing technology for artistically interesting, highly interactive, simulated worlds. We want to give users the experience of living in (not merely watching) dramatically rich worlds that include moderately competent, emotional agents.
    An Oz world has four primary components. There is a simulated physical environment, a set of automated agents which help populate the world, a user interface to allow one or more people to participate in the world [13 [Integrated natural language generation systems. April 1992]], and a two-player adversary search planner concerned with the long term structure of the user's experience [2]. Oz shares some goals with traditional story generation systems [18 [The Metanovel: Writing Stories by Computer. 1976], 16 [Story-telling as planning and learning. 1976]], but adds the significant requirement of rich interactivity.
    One of the keys to an artistically engaging experience is for the user to be able to "suspend disbelief". That is, the user must be able to imagine that the world portrayed is real, without being jarred out of this belief by the world's behavior. The automated agents, in particular, mustn't be blatantly unreal. Thus, part of our effort is aimed at producing agents with a broad set of capabilities, including goal-directed reactive behavior, emotional state and behavior, and some natural language abilities. For our purpose, each of these capacities may be as shallow as necessary to allow us to build broad, integrated agents [4 [Broad agents. March 1991]].
    Oz worlds are far simpler than the real world, but they must retain sufficient complexity to serve as interesting artistic vehicles. The complexity level is somewhat higher, but not exceptionally higher, than typical AI micro-worlds. [...] We suspect that some of our experience with broad agents in Oz may transfer to other domains, such as social, real-world robots.
    Building broad agents is a little studied area. Much work has been done on building reactive systems [6 [Intelligence without representation. 1987, subsumption architecture for RS], 10, 9, 24], natural language systems, and even emotion systems [8 [In-Depth Understanding. 1983], 23 [The Cognitive Structure of Emotions. 1988], 21 [Daydreaming in Humans and Machines. 1990]]. There is growing interest in integrating action and learning (see [14 [Proceedings of AAAI Spring Symposium on Integrated Intelligent Architectures. March 1991]]) and some very interesting work on broader integration [25 [A basic agent. Computational Intelligence. 1990], 22 [[Soar] Unified Theories of Cognition. 1990]]. [...]

    Tok and Lyotard
    In analyzing our task domain, we concluded that the capabilities needed in our initial agents are perception, reactivity, goal-directed behavior, emotion, social behavior, natural language analysis, and natural language generation. Our agent architecture, Tok, partially (but not fully) partitions these tasks into several communicating components. Low-level perception is handled by the Sensory Routines and the Integrated Sense Model. Reactivity and goal-directed behavior are handled by Hap [17 [Hap: A reactive, adaptive architecture for agents. 1991]]. Emotion and social relationships are the domain of Em [? [Building emotional agents. May 1992]]. Language analysis and generation are performed by Gump and Glinda [12, 13 [Integrated natural language generation systems. April 1992]]. [...]

    The Simulated World and Perception
    The Oz physical world is an object-oriented simulation. [...]
    Each Tok agent runs by executing a three step loop: sense, think, act. During each sense phase a snapshot of the perceivable world is sensed and the data is recorded in the sensory routines. These snapshots are time-stamped and retained. An attempt is then made to merge them into the Integrated Sense Model (ISM), which maintains the agent's best guess about the physical structure of the whole world. The continuously updated information in the sensory routines and the longer term, approximate model in the ISM are routinely queried when choosing actions or updating the emotional state of Lyotard.

    Action (Hap)
    Hap is Tok's goal-directed, reactive action engine [17]. It continuously chooses the agent's next action based on perception, current goals, emotional state, behavioral features and other aspects of internal state. Goals in Hap contain an atomic name and a set of parameters which are instantiated when the goal becomes active [...].
    Hap stores all active goals and plans in a hierarchical structure called the active plan tree (APT). There are various annotations in the APT to support reactivity and the management of multiple goals. Two important annotations are context conditions and success tests. Both of these are arbitrary testable expressions over the perceived state of the world and other aspects of internal state. [...]
    Hap executes by first modifying the APT based on changes in the world. Goals whose success test is true and plans whose context condition is false are removed along with any subordinate subgoals or plans. Next one of the leaf goals is chosen. [...] At this point the execution loop repeats.

    Emotion (Em)
    Em models selected emotional and social aspects of the agent. It is based on [...] [23 [The Cognitive Structure of Emotions. 1988]]. Like that work, Em develops emotions from a cognitive base: external events are compared with goals, actions are compared with standards, and objects are compared with attitudes. [...]
    [...]
    Some emotions are combinations of other emotions. [...]
    Finally, love and hate arise from noticing objects toward which the agent has positive or negative attitudes. In Lyotard we use attitudes to model the human-cat social relationship. Lyotard initially dislikes the user, a negative attitude, and this attitude varies as the user does things to make Lyotard angry or grateful. As this attitude changes, so may his resulting love or hate emotions.
    Emotions fade with time, but attitudes and standards are fairly stable. An agent will feel love when close to someone liked. This fades if the other agent leaves, but the attitude toward that agent remains relatively stable.

    Behavioral Features
    [...] It became clear that Lyotard's emotion-related behavior depended on an abstraction of the emotional state.

    The abstraction, called "behavioral features", consists of a set of named features that modulate the activity of Hap. Features are adjusted by Hap or Em to control how Hap achieves its goals. Em adjusts the features to express emotional influences on behavior. It continuously evaluates a set of functions that determine certain features based on the agent's emotional state. Hap modifies the features when it wants to force a style of action. [...]
    Features may influence several aspects of Hap's execution. They may trigger demons that create new top-level goals. They may occur in the preconditions, success tests, and context conditions of plans, and so influence how Hap chooses to achieve its goals. Finally, they may affect the precise style in which an action is performed.

    Discussion of Tok and Related Work
    [...]

    Emotion, Explicit Goals, and World Models
    [...]
    It is essential that Oz agents be reactive [...]. [...]
    Once we accepted the importance of reactivity and grounding in sensory inputs, which was forced upon us by facing our task squarely, it was not difficult to develop an architecture that represented goals explicitly while retaining reactivity. [...]
    Thus, we suggest that robust, reactive behavior is not diminished by the presence of explicit goals in an agent, but by the attempt to model the agent's choice of action as a planning process over characterizations of the world. Our view of goals allows us to avoid many of the unpleasant consequences of trying to model the world, while preserving the strengths of goals as a mechanism for organizing action. (Though we note that it may well be possible to combine these views in "plan-and-compile" architectures [20, 19], of which Soar is a particularly rich example [15 [Integrating planning, execution, and learning in soar for external environments. 1990]].)

    Mixing Independent Behaviors
    As we have used the word, a behavior is a cluster of related goals and plans that produces some recognizable, internally coherent pattern of action. A behavior is often represented by a single high-level goal.
    [...]
    The context conditions and success tests were developed to make each behavior robust in the face of changes in the world, be they unexpected failures or serendipitous success. We expected these surprises to be due to external events performed by other agents or unforeseen complexities in the physical nature of the world. However, it has turned out that the agent's own actions, performed by other interleaved behaviors, are one of the main causes of unexpected changes. The context conditions and success tests allow these independent behaviors to mix together fairly well, without much explicit design effort to consider the interactions. Thus, adding reactivity to goal-directed behavior seems to help support the production of coherent, robust overall behavior from independently executing particular behaviors.

    Modeling Personality
    [...]
    [...] With the feature to behavior mapping thus fixed, facets of a personality can be determined by the mapping from emotion to features.
    [...]

    Conclusion
    [...]
    While Tok maintains various kinds of memory, including perceptual memory, a richer learning mechanism is conspicuously absent from the architecture. There are two reasons for this. [...] To help judge this possibility, one of our colleagues is implementing Lyotard in the Soar architecture.
    We are engaged in several efforts to extend Tok. First, Gump and Glinda, our natural language components [...]. [...] We have increasingly observed similarities in the mechanisms of Hap and Glinda, and are exploring the possibilities of merging them fully.
    Second, since the Oz physical virtual world is itself a simulation of the physical world, it would be conceptually straight-forward to embed a (possibly imprecise) copy inside Tok for use as an envisionment engine. This might allow Tok, for instance, to consider possible re-orderings of steps in behaviors, and to make other decisions based on a modicum of foresight.
    [...]"

    Comment
    Now, we do know where Sony AIBO came from and that AIBO is not a robotic dog, but a robotic cat. :D

    Interesting is the fact that Tok is a reactive architecture for agents with goals.
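
    For illustration, a minimal sketch in Python of the quoted sense-think-act loop with time-stamped percept snapshots and Hap-style success tests and context conditions; the cat scenario is our own assumption and not the actual Lyotard code:

    import time

    class Goal:
        def __init__(self, name, success_test, context_condition, action):
            self.name = name
            self.success_test = success_test            # drop goal: achieved
            self.context_condition = context_condition  # drop goal: irrelevant
            self.action = action

    def sense(world):
        # Sense phase: a time-stamped snapshot of the perceivable world.
        return {"time": time.monotonic(), **world}

    def step(world, goals):
        percept = sense(world)
        # Think phase: prune the (flat) plan tree as Hap prunes its APT:
        # goals whose success test is true or whose context condition is
        # false are removed.
        goals[:] = [g for g in goals
                    if not g.success_test(percept) and g.context_condition(percept)]
        # Act phase: execute the first remaining leaf goal's action.
        if goals:
            goals[0].action(world)

    world = {"user_near": True, "petted": False}
    goals = [Goal("flee_user",
                  success_test=lambda p: not p["user_near"],
                  context_condition=lambda p: not p["petted"],
                  action=lambda w: w.update(user_near=False))]
    step(world, goals)
    print(world)  # the cat has fled; on the next step the goal is dropped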

    Also interesting is the point that the simulated environment respectively Virtual Environment (VE) of Oz uses

  • discrete, geometric spatial relations,
  • world model,
  • representation in form of a graph, and
  • data with time stamp.

    Also interesting are the further developments, comprising

  • implementation of the virtual cat in the Soar architecture and
  • integration of the agent architecture Tok with the Virtual Environment (VE) Oz world, so that the virtual agent, the story telling, and the artistic performance can be realized more believably, but this does not mean that the IVE itself becomes an Intelligent Agent-Based System (IABS) in this way.

    Our Evoos is a Cognitive Agent-Based System (CABS) or simply Cognitive Agent System (CAS), and Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot), which implies it is a Cognitive Robotic System (CRS, or Cbot or Cogbot).

    "Oz worlds are far simpler than the real world, but they must retain suficient complexity to serve as interesting artistic vehicles. The complexity level is somewhat higher, but not exceptionally higher, than typical AI micro-worlds."
    But what is required is a believable bidirectional mirror world respectively magic mirror world.

    We quote a second document, which is about the fields of Multi-Agent System (MAS), Believable Agent System (BAS), and Intelligent Virtual Environment (IVE or IntVE), and was published in May 1994: "Synergistic Capabilities in Believable Agents
    [...]
    The Oz project at CMU is developing technologies that will allow artists to create dramatic, interactive environments [1 [Virtual reality, art, and entertainment. March 1994]]. One important aspect of such environments is the existence of believable agents. [...] [2 [Broad agents. March 1991]]. In our current architecture, called Tok, these capabilities include: goal-directed behavior, reactivity to the environment, emotions, social skills, and perception [3 [Integrating reactivity, goals, and emotion in a broad agent. July 1992]]. We are also exploring the integration of natural language generation and understanding.
    [...]

    [...]
    Many of Tok's capabilities interact with each other in a synergistic manner. For instance, an agent with natural language generation is going to be more believable than one without, but if the agent has a broad range of other capabilities, language generation adds even more believability. [...]
    [...] Goal-based action and social skills also combine in synergistic relationships with emotion. [...]

    [...]
    The emotions in Tok agents are controlled by the Em system [7 [Building emotional agents. May 1992]]. Em generates emotions based on ideas from the cognitive emotion theory of Ortony, Clore, and Collins [6 [The Cognitive Structure of Emotions. 1988]]. Em has a set of emotion generation rules that fire when certain internal or external states or events occur. A number of Em's emotion generation rules are based on the processing of Tok's action architecture, called Hap [4 [Hap: A reactive, adaptive architecture for agents. June 1991], 5 [Realtime control of animated broad agents. June 1993]].
    Hap is a goal-based reactive architecture that processes the goals, plans, and actions in Tok agents. [...]
    [...] emotional agents are more believable if they express their emotions through their goals, plans, and actions. At the goal level, Em can affect what goals an agent has and what priority they are given. [...] Emotions may also affect what plan an agent chooses to use in pursuit of a goal. [...]
    [...] Tok agents, however, because of the integration of emotion and action, not only produce emotions and actions, they also produce something else: emotional actions.
    [...] adding memory allows an agent to recall past emotional experiences. Also, adding a language system allows the agent to use expressive speech, both in what is said and how it is said. [...]

    [...]
    There are (at least) two ways that emotions and social skills combine to make Tok agents more believable. [...]
    [...] Many causes of emotion in ourselves and in interesting agents from literature arise because of others. [...] Without social knowledge and relationships with other agents, Em would have a large gap in the types of emotions that it could generate.
    [...] the dynamics of social relationships often depend on emotions. [...] Tok agents could enter into social relationships with other agents without having emotions, but by integrating these two capabilities these agents have much richer relationships that can change over time based on emotional factors.

    [...]
    [...] capabilities can combine in a synergistic manner to create new behaviors and more believable agents. [...]"

    Comment

    For better understanding of the fields and topics and the related quotes and comments, we quote the test homepage of the Equator project, which was published on the 17th of August 2000: "The central goal of the Equator IRC is to promote the integration of the physical with the digital. In particular, we are concerned with uncovering and supporting the variety of possible relationships between physical and digital worlds. Our objective in doing this is to improve the quality of everyday life by building and adapting technologies for a range of user groups and application domains. Examples include:

  • combining physical and digital cities to promote people's understanding of the world within which they live, and to enhance wayfinding and access to physical and digital artefacts, information and people.
  • creating new forms of play, performance and entertainment that combine the physical and digital so as to promote learning, participation and creativity.
  • exploring how new technologies that merge the physical and the digital can support activities outside of the workplace, including maintaining family and social relationships in the home, and supporting work in the open air.

    Meeting this objective will require us to address fundamental and long-term research challenges. We will conduct research into new classes of device that link the physical and the digital, including embedded devices that are integrated into physical environments, information appliances that combine computing functionality with purpose designed physical objects, and wearable devices that are carried on the person. In turn, these activities will be supported by fundamental research into adaptive software architectures that can knit together heterogeneous collections of such devices, as well as new design and evaluation methods that draw together approaches from social science, cognitive science and art and design.
    Equator runs for 6 years and pools multidisciplinary expertise from research domains including collaborative virtual environments, cooperative work, ethnography, design, distributed systems, mobile and wearable computing, and social science. [...]
    At the moment we are recruiting. If you are thinking of doing a PhD or are a post-doc researcher seeking to conduct challenging and intriguing research then contact us for more information.
    We also seek interested parties from industry. If you are part of an organisation who is interested in the work of Equator, or would just like more information, please contact us."

    Comment
    Obviously, it was a research project, which started around August 2000 and ended in 2006 or 2007.
    Eventually, it was about multimedia systems, at first interpreted as Augmented Reality (AR) and Mixed Reality (MR), and including AR and MR.

    For sure, by already having the features of the fields of

  • Intelligent Agent System,
  • animated VE, Intelligent Virtual Environment (IVE or IntVE), believable agent, simulation of emotional intelligent agent and affection,
  • immobile Robot,
  • Swarm Intelligence (SI), Multi-Agent System (MAS), and self-organization,
  • Multimodal User Interface (MUI),

    in the Evolutionary operating system (Evoos) (kernel space), all applications (user space) can utilize these features and functionalities.

    We also list some references of the thesis, which is titled Spatial Computing and dated 16th of May 2003:

  • A survey of Augmented Reality, in Presence: Teleoperators and Virtual Environments. 1997
  • Collaborative Augmented Reality. 2002 [(based on AR Toolkit)]
  • The Relationship Between Matter and Life. 2001
  • Living in Augmented Reality: Ubiquitous Media and Reactive Environments. 1997
  • A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment. 1997
  • Situated information spaces and spatially aware palmtop computers. 1993
  • How We Became Posthuman. 1999
  • Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. 1997
  • The Art and Science of Synthetic Character Design. 1999
  • Regeneration of Real Objects in the Real World. 2002 [(based on AR Toolkit)]
  • A Taxonomy of Real and Virtual World Display Integration, in Mixed Reality. 1999 [(first variant of Reality-Virtuality-Continuum (RVC))]
  • The Invisible Computer. 1998
  • The Magic Carpet: Physical Sensing for Immersive Environments. 1997
  • The Brain Opera Technology: New Instruments and Gestural Sensors for Musical Interaction and Performance. 1999
  • Unifying Augmented Reality and Virtual Reality User Interfaces. 2002 [no reference in the text]
  • The Magnifying Glass Approach to Augmented Reality Systems, International Conference on Artificial Reality and Tele-Existence '95 / Conference on Virtual Reality Software and Technology (ICAT/VRST '95). 1995
  • Cospace: Combining Web-Browsing and Dynamically Generated 3D Multiuser Environments. 1999
  • A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. 1987
  • Boom Chameleon: Simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display. 2002
  • Cybernetics: Or Control & Communication in the Animal and the Machine. 1948
  • www.ubiq.com/hypertext/weiser/UbiHome.html. 1988

    We quote a report, which is about the field of Holonic Agent System (HAS) and IVE, and was published in a magazine in October 1999: "COGs: Cognitive Architecture for Social Agents
    The COMMA-COGs project (Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents) is part of the Multi-Agent Systems group's larger aim to develop integrated architectures for multi-agent systems. In this larger context we hope to give an account of motivation in integrated agent architectures that exploits relevant research in cognitive science.
    A new branch of computer applications is opening up based on virtual animated worlds, where human users, their software representatives [(avatars)] and a host of services co-exist in a networked environment. This promises to be a major area for the application of agent-oriented techniques. However, these applications demand multi-agent systems that are more flexible than current systems and integrate a wide variety of functionality in a dynamic fashion. The key to achieving this integration is to take seriously the notion of bounded rationality and develop an architecture which manages the resources of an agent consistent with changes in the [networked] environment. It should also share resources out to sub-components of the agent according to the way they contribute to the agent's overall goals. The agent society, correspondingly, must be structured to take into account the resource managing behaviour of the individual agents.
    The COGs project has three major technical goals:

  • COGs focuses on systems where several agents interact with one another. This raises the issue of how resources are used within a group of agents. COGs is, therefore, investigating cooperation protocols that embody fair mechanisms for sharing resources and is incorporating constructs into the architecture that will support the dynamic specification, monitoring and policing of agent resource consumption.
  • COGs is developing a framework for self-organisation in agent societies, which allows individual agents to aggregate dynamically with other agents to form a larger entity which itself represents an agent to the outside environment (a holon). In particular, it is looking at the impact of social laws on the process of self-organisation and investigating how particular organisational structures maximise resource utilisation.
  • COGs is developing a resource-aware agent architecture that allows agents to flexibly manage their resources, whilst pursuing multiple goals in an unpredictable environment. It has produced a general model of abstract resources and introduced constructs into the architecture that monitor resource usage and help to adapt it to a changing environment. Particular attention has been given to the way this affects the flow of control within our current agent architecture and on how [the] resulting architecture [matches] with models documented in the Cognitive Science literature.

    Realising Motivated Agents
    The resource-oriented and holonic constructs are being added to an architecture whose individual components have a clear, logical characterisation and which uses logic-based planning techniques to derive future courses of action. The architecture is based on the INTERRAP model, which has three layers. In this view the algorithm defining the behaviour of an agent can be captured with the following equation:
    algorithm = logic + motivation
    This interprets the original equation by Kowalski by relabelling the control component as motivation. By this we mean to indicate that the behaviour of an agent is determined by how the logical specification is used to derive a collection of concurrent sub-processes within an INTERRAP layer, and how the concurrent layers interact, downwards through activation and inhibition and upwards through interrupts.
    We use the term motivation to describe this form of control because we are looking a[t] models of motivation from biology and cognitive science as a means to redesign the architecture. A guiding engineering principle behind the redesign is that the resulting behaviour should exhibit bounded rationality: that is to say that given a particular machine to run the algorithm and a utility measure to describe the value of resource usage, the combination of logic and motivation in the above equation makes optimal use of resources for a particular class of environment.
    To describe the behaviour of a group of agents we have to indicate how the social structure imposes additional constraints on both the logic and control components of the individual agents and how the protocols for agent interaction allow one agent to influence the behaviour of another.

    Resource-Oriented Control
    Part of the formalisation of the agent model has been to represent the interaction between layers as a form of meta-level control. This extends naturally to the resource-oriented control constructs we are introducing. In this context, each layer consists of concurrent modules that compete for resources that may be concrete, such as the football in our virtual soccer application, or more abstract, such as whether to attack or defend. A control module for each layer keeps book on the behaviour of the modules of that layer and adjusts the allocation of resources to modules based on how useful the module has been in the past. However, the ability to reason about resource consumption on each layer is limited. It is the layer above that reasons about major strategic changes in resource configuration in the layer below and exerts meta-level control by directing the control module in the layer below [13 [Layered, resource-adapting agents in the robocup simulation. 1999. to appear.]].

    Holons: Recursive Agent Structures
    One goal of COGs has been to investigate the novel holonic paradigm of multi-agent programming [9 [Flexible autonomy in holonic multi-agent systems. 1999]], where agents surrender some of their autonomy and merge to form a "super-agent" (or holon), that acts, when viewed externally, as a single agent.
    We have provided a formal definition of a holon, given a classification of possible application domains, and have developed an algebraic characterisation of the merge operation. Putting this into practice, we have developed algorithms for holon formation and on-line re-configuration. We have also begun to examine how resource contention between agents can be managed within the holonic paradigm [8 [Resource management for boundedly optimal agent societies. 1998]].

    Implementation Language
    The main vehicle for implementing the redesigned architecture is the programming language Oz, in particular its latest incarnation [...]. This has proven particularly useful for the following reasons:

  • The clear semantics of the Oz language allows us to implement the logical specification of the system in a direct way.
  • The control offered by the language, that is its concurrency, data-flow constructs and interrupt mechanisms, have proven a natural match for the resource-oriented control constructs with which we have been experimenting.
  • The elegant way the language incorporates disjunction, constraints and the primitives to build a variety [of] inference engines has allowed us to use logic-based planning techniques in an efficient manner.
  • Some of the application domains we are looking at are naturally distributed and can be elegantly modelled by agents that migrate between sites. [The distributed multiparadigmatic programming system] Mozart [implementing the programming language Oz] is the perfect language to implement such an architecture.

    Applications for Motivated Agents
    The results of the project are being used in a variety of application areas. The resource-oriented decomposition of systems into holons is being used for the dynamic scheduling of inter-modal transport [3 [Teletruck: A holonic fleet management system. 1998]]. The resource[-]oriented control that has been added to the architecture has been tested in the RoboCup Virtual Soccer competition [11 [Experimenting with layered, resource-adapting agents in the robocup simulation. 1998]]. In addition, we have developed a distributed toolkit for simulating societies of agents. The toolkit can be use[d] to visualise animated objects in distributed, virtual worlds [17 [An architecture for co-habited virtual worlds. 1999. to appear.]]. We are currently using the toolkit to realise a server for RoboCup Rescue, an extension of the RoboCup Soccer idea to the simulation of a rescue operation after a major urban catastrophe such as an earthquake. The rescuers in this simulation are being implemented as agents incorporating our resource-oriented constructs.
    In order to test some of our ideas on the relationship between resource[-]oriented control and models of emotion and motivation, we are using an agent architecture to drive a lifelike character on a web page [1 [Integrating models of personality and emotions into lifelike characters. Affect in Interactions Towards a New Generation of Interfaces. 1999]]. The behaviour of the agent is determined in part by a component that reasons about the emotional state of the agent. Ongoing research is looking at how the model of emotions that lies behind this component can also be used to describe the major control-related events in the COGs agent architecture."
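    To make the quoted resource-oriented control scheme more tangible, the following minimal sketch shows modules of a layer competing for resources, a per-layer control module that keeps book on past usefulness, and a layer above that exerts meta-level control. This is only our illustrative reading; none of the names come from the COGs or InteRRaP sources:

      # Minimal sketch of layered, resource-adapting control as described above;
      # all names are illustrative assumptions, not CoMMA-COGs code.

      class Layer:
          def __init__(self, modules):
              # Each module competes for a share of the layer's resources.
              self.shares = {m: 1.0 for m in modules}
              self.usefulness = {m: 0.0 for m in modules}

          def report(self, module, utility):
              # The control module keeps book on how useful each module has been.
              self.usefulness[module] += utility

          def reallocate(self):
              # Adjust allocations in proportion to past usefulness.
              total = sum(self.usefulness.values()) or 1.0
              for m in self.shares:
                  self.shares[m] = self.usefulness[m] / total

      class MetaLayer:
          def __init__(self, below):
              self.below = below

          def strategic_shift(self, module, bonus):
              # The layer above exerts meta-level control by directing
              # the control module of the layer below.
              self.below.report(module, bonus)
              self.below.reallocate()

      behaviour = Layer(["attack", "defend"])
      behaviour.report("defend", 3.0)
      behaviour.reallocate()
      planning = MetaLayer(behaviour)
      planning.strategic_shift("attack", 5.0)   # strategic change from above
      print(behaviour.shares)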

    Comment
    First of all, we note that the quoted magazine report was created hastily and there seemed to be no more time for proof-reading. For example,

  • the authors contradict themselves about how the layers of InteRRaP interact, and they also made a lot of typos and some rough writing errors,
  • the dates of publication of the referenced works show that they were done in the same period in which we researched, developed, and created our Evoos, and
  • relevant works are marked as "to appear", which means we have here either something brand new or what is called a white paper.

    The underlying InteRRaP is a hybrid architecture and is characterized as a goal-driven and layered architecture derived from the BDI architecture, which was only later wrongly reinterpreted as a cognitive agent architecture.

    Holonic InteRRaP is focused on

  • Computer-Integrated Manufacturing (CIM),
  • Flexible Manufacturing System (FMS), and
  • Holonic Manufacturing System (HMS),

    but also

  • social and organisational,
  • Virtual Environment (VE) animation,
  • affect simulation and believable agent, and
  • robot simulation.

    The Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents (CoMMA-COGs) is focused on the same environment and scopes of application as, for example, the

  • planning of processes, specifically in the fields of manufacturing and logistics, and
  • Oz project, which is "An Architecture for Action, Emotion, and Social Behavior", and IVE {or only VE animation in a VE?}.

    It seems that CoMMA-COGs is also merely a renaming of Holonic InteRRaP with something related to cognition, like for example the Advanced Logistics Project (ALP), which was also renamed to Cognitive Agent Architecture (Cougaar) virtually at the same time.
    But as in the case of immobot and Cougaar, CoMMA-COGs is not a cognitive (system) architecture at all, but still a goal-driven and layered agent architecture, which is derived from the BDI architecture and simulates emotional behaviour and affection, as well as social behaviour.
    This shows that we also started this trend of making Intelligent AS into Cognitive AS and of making IVE into MUI.
    The Oz project is from 1992, but CoMMA-COGs is from 1999. Why 7 years later, and why call it cognitive and resource-oriented? Why was there an interest in resource-oriented control, as also seen with the activities of Metaglue at the same time in 1999?

    The CoMMA research included the subprojects

  • Multiagent Planning and Scheduling (MAPS), July 1995 to December 1997, and
  • Cognitive Architecture for Social Agents (COGs), January 1998 to December 2000.

    That Holonic Multi-Agent System, which was published on the 12th of May 1999 and reflects our Artificial Life (AL) concept of the DNA, the stem cell, and the developmental biology for a reflective computing system, was one of the first nasty actions, and that CoMMA-COGs project was the second very nasty action of the F.R.German clique around the companies SAP, Deutsche Telekom, Volkswagen, Daimler, Airbus, universities, other research institutes, and so on to damage, steal, and destroy our original and unique works of art titled

  • Betriebssystem nach evolutionären und genetischen Aspekten==Operating system according to evolutionary and genetic aspects, aka. Evoos, published on the 10th of December 1999, and
  • Ontologic System, aka. OS, published at the end of October 2006,

    as the same group with their booths did with other fraudulent works since 2000, like for example the

  • SmartKom and EMBASSI projects
    • Deep Map and Talking Map in 2000 to 2002,
    • SmartKom: Multimodal Communication with a Life-Like Character in 2001,
    • SmartKom: Symmetric Multimodality in an Adaptive and Reusable Dialogue Shell based on "a [distributed] multi-blackboard platform with ontology-based messaging" "based on [the Parallel Virtual Machine (]PVM[)]" with "publish/subscribe messaging on top of PVM" in 2003,
    • SmartKom Mobile Multi-Modal Dialogue System in 2002,
    • SmartKom: Foundations of Multimodal Dialogue Systems (Cognitive Technologies) in 2006,
  • SmartGuide - An Intelligent Information System basing on Semantic Web Standards, 2002,
  • SmartWeb project, which is the follow-up project to SmartKom
    • SmartWeb: Mobile Applications of the Semantic Web, 20th of September 2004, based on symmetric MUI, and
    • SmartWeb Handheld: Multimodal Interaction with Ontological Knowledge Bases and Semantic Web Services, 6th of January 2007 (?), based on SmartGuide,
  • OntoAgent: A Platform for the Declarative Specification of Agents in RDF, RDF Schema, and RuleML, in 2002, i.e. the ontology-based software agent or softbot OntoAgent,
  • the ontology-based cross-compiler OntoJava, which compiles RDF, RDF Schema, and RuleML to Java/SQL-based object databases and inference engines, in 2004, and
  • field of Industry 5.0 (Industry 4.0 and Ontoverse (Ov), specifically Mixed Reality (MR), including Augmented Reality (AR)) in 2011

    in addition to the collaborations with other universities and research institutions worldwide.

    On the one hand, they have spied out and observed our research and development, and our process of creation, for example the

  • designation of cognitive architecture,
  • algorithmic (compare with complexity and Algorithmic Information Theory (AIT)),
  • motivation (compare with the title of the chapter 1.1 Motivation of The Proposal),
  • logic system specification, {?correct or only expert system} Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),
  • more flexible than current systems (see chapters 2.6 Negative Eigenschaften von Betriebssystemen==Negative Properties of Operating Systems and 2.7.3 Flexible Grundlagen==Flexible Foundations),
  • adapting to unpredictable environment (compare with chapters 2.6 Negative Eigenschaften von Betriebssystemen==Negative Properties of Operating Systems and 2.7 Neue Anforderungen an Betriebssysteme aus der Sicht der Software-Technologie==New Requirements for Operating Systems from the Perspective of Software Technology),
  • meta-layers (compare with chapter 2.7.4 Metaschichtenarchitektur und Reflektion==Meta-layer Architecture and Reflection),
  • self-organization (compare with chapter 3 Entwicklung und Funktionsweise eines Gehirns==Development and Functioning of a Brain),
  • bounded rationality (compare with the chapter 5 Zusammenfassung==Summary (keyword intellectual properties)), and also
  • methodological comparison of prior art (compare with chapter ).

    That is no conflict of interest. For sure, we looked at the original agent architecture InteRRaP and also the programming system Mozart-Oz at that time, but we did not know about the existence of the CoMMA-COGs extension of InteRRaP on the basis of Mozart-Oz until December 2006 at all, and it would have been utter nonsense to give Professor Doctor W. Banzhaf or anybody else at his faculty an obvious plagiarism as a proposal for a diploma thesis. We do know that holding members of that research company had already known C.S. since 1998, but we did not know that they had established that research company for stealing others' Intellectual Properties (IPs) and had those connections to other entities worldwide.

    On the other hand, they also have functions, like for example

  • resource controlling,
  • resource managing,
  • resource sharing,
  • fair mechanisms for sharing resources,
  • dynamic specification, monitoring, and policing of resource consumption,
  • control-related events,
  • interrupts,
  • dynamic scheduling,
  • and so on,

    which are also related to the fields of Intelligent Environment (IE) (see for example the quoted document "Meeting the Computational Needs of Intelligent Environments: The Metaglue System") and Robotic System (RS), and so on, but also to the field of operating system (os) (see for example the chapter 2.1 Ziele eines Betriebssystems==Goals of an Operating system of The Proposal), as sketched below.
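    For illustration, the following minimal sketch shows such os-like accounting, policing, and fair sharing of per-agent resource consumption. All names are purely hypothetical assumptions of ours and do not come from any of the quoted systems:

      # Minimal sketch of os-like accounting and policing of per-agent resource
      # consumption; hypothetical names, not taken from any quoted system.

      class ResourceAccountant:
          def __init__(self, budget_per_agent):
              self.budget = budget_per_agent
              self.used = {}

          def charge(self, agent, amount):
              # Accounting: record what each agent consumes.
              self.used[agent] = self.used.get(agent, 0.0) + amount
              # Policing: refuse consumption beyond the agreed budget.
              if self.used[agent] > self.budget:
                  self.used[agent] -= amount
                  raise RuntimeError(agent + " exceeded its resource budget")

          def fair_share(self, agents, total):
              # Fair mechanism: an equal split of a shared resource.
              return {a: total / len(agents) for a in agents}

      accountant = ResourceAccountant(budget_per_agent=10.0)
      accountant.charge("agent_a", 4.0)
      print(accountant.fair_share(["agent_a", "agent_b"], total=8.0))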

    Somehow, we have the impression that when we talked about an operating system, they simply talked about an agent system, i.e. they simply substituted the term operating with the term agent.
    This also reflects our finding at that time that everybody else thought putting this functionality into the kernel space or operating system layer was an illusory concept, so that they did it only in the user space or application layer, and in the middleware layer.
    IABS and IVE, including CoMMA-COGs, exist only on the application layer and provide no Virtual Machine (VM), with the exception of ABSs based on the programming languages Java and Mozart-Oz, which are not related to operating system-level virtualization or containerization.
    Of course, some months later, around April 2000, the attempt to steal the rest of our Evoos followed: "An Open Architecture for Holonic Cooperation and Autonomy
    [...]

    Holonic Kernel
    The holonic kernel (HK) is a layered framework of IEC 61499 function blocks. A holonic kernel resides on each holonic resource and facilitates holon management through the provision of suitable services. We assume that a holon is dynamically created as a heterarchy of function blocks upon one or more holonic resource(s). This formation must satisfy the requirements demanded by that holon's autonomy, cooperation and openness roles. Contrary to traditional manufacturing paradigms, holons are managed in a distributed fashion through interaction with their respective holonic kernels.
    [...]
    References
    [...]
    [2] "Draft - Publicly Available Specification - IEC 61499: Function Blocks for Industrial-Process Measurement and Control Systems, Part 1 - Architecture, Part 2 - Engineering Task Support," International Electro­technical Commission, Geneva, April 2000."

    Too late, because our Evoos integrates IAS, CAS, Cognitive-Affective Personality or Processing System (CAPS), MBAS or ImRS, MobRS, CRS, and the reflective, object-oriented, actor-based (concurrent), (resilient) (survivable) fault-tolerant, and distributed operating system TUNES OS based on the (frame-based) Arrow System, and in this way also the very similar reflective, actor-based (concurrent), (resilient) (survivable) fault-tolerant, (trustworthy) reliable, and distributed operating systems Aperion (Apertos (Muse)) and Cognac based on Apertos, and they were not able to copy and steal it all at one time (see also the other quotes, specifically the ones related to Cougaar and PAL, and also Agent Chameleons).

    But as in the case of the holonic kernel, CoMMA-COGs is not an operating system kernel. In the document titled "Holonic Multi-Agent System", published on the 12th of May 1999, and in other documents related to InteRRaP we could not find the term interrupt at all (so far), with the only exception of the document titled "Agent-Based Design of Holonic Manufacturing Systems", which is about fractal or holonic models for a planning and control architecture of a Computer-Integrated Manufacturing (CIM) system and their implementation on the basis of the software agent architecture InteRRaP, which there is called "interrupt agent architecture", a writing error that somehow looks deliberately made and reveals a certain thinking of the authors.

    The authors of the document "Multi-agent Systems as Intelligent Virtual Environments" quoted below support our point of view that CoMMA-COGs is not related to the fields of

  • operating system (os),
  • Autonomic Computing (AC),
  • Robotic Automation (RA),
  • Mixed Reality (MR), including Augmented Reality (AR) and Augmented Virtuality (AV),
  • Emotive Computing (EmoC) and Affective Computing (AffC), or Emotional Intelligence (EI),
  • user reflection,
  • etc.

    Similar fraudulent projects and works of the U.S.American clique around the DARPA, NASA, SRI, MIT, UC, etc. based on Evoos and OS include for example the

  • Cognitive Agent Architecture (Cougaar)
    • MicroEdition based on immobot and Sensor Network (SN)
    • Self-Managing Multi-Agent System
    • Semantic (World Wide) Web, Intelligent Agent-Based System (IABS), and Grid Computing (GC), respectively semantic grid and cognitive grid

    and

  • Personalized Assistant that Learns (PAL)
    • Reflective Agents with Distributed Adaptive Reasoning (RADAR)
    • Cognitive Assistant that Learns and Organizes (CALO)
    • Multimodal Interfaces for Cell Phones and Mobile Technology

    But now we can see that focusing on the underlying Information and Communications Technology (ICT), for example the hardware and the software, specifically the operating system, and the computing and the networking, has become even more important and decisive.
    In addition, reflection of the user and the system themselves has become just as important and decisive, for example cognitive prosthetics or cybernetic augmentation and extension.

    At first, we also wanted to use the programming language and runtime environment Mozart-Oz, but then we thought about an ontology-based Java programming language and runtime environment, including a VM with an integrated programmable inference engine, but not cross-compilers for RDF, RDF Schema, RuleML, and OWL to Java and SQL like OntoJava and OntoSQL, and eventually we generalized the whole programming language with a Virtual Virtual Machine (VVM) approach and Ontologic Programming (OP). With Ontologics we do not need a cross-compiler at all anymore.
    A large part of this research and development was stolen with the programming language C#, which directly followed the fraudulent actions in relation to the field of microService-Oriented Architecture (mSOA).

    We have seen the same fraudulent actions in relation to the fields of microService-Oriented Architecture (mSOA) and Resource-Oriented Computing (ROC), which we already could relate to our Evoos.

    But we already said that the claim of others in relation to the invention of ROC is related to the reflective distributed operating systems Aperion (Apertos (Muse)) and TUNES OS, and Evoos.
    But while CoMMA-COGs presented a stolen part, respectively properties, of our Evoos, it also provided a part, respectively evidences, which finally prove our claims in relation to mSOA and ROC, as well as os-level virtualization or containerization.
    Through the match with CoMMA-COGs we got another evidence and finally the proof of being the creator of ROC. Because ROC is closely related to mSOA, we also got another evidence and finally the proof that mSOA was introduced with Evoos. And because mSOA is closely related to operating system-level virtualization or containerization, we also got another evidence and finally the proof that containerization was introduced with Evoos.

    Furthermore, we also got another evidence and finally the proof of having the holon in Evoos, because it is reflective, fractal or holonic, and holologic, and also self-organizing, and hence another evidence and finally the proof of having MAS in Evoos, because a holon is a type of MAS.
    This is relevant in relation to other technologies, such as for example Cougaar (see also the quoted document "Watching Your Own Back: Self-Managing Multi-Agent System" above).
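    Because a holon is a recursive agent structure, the relation that a holon is a type of MAS can be illustrated with a composite: a holon groups member agents, but is itself addressed as a single agent. The following is a minimal sketch under our own assumptions, with no names taken from the quoted works:

      # Minimal sketch of a holon as a recursive agent structure: a group of
      # agents that, viewed externally, acts as a single agent; illustrative only.

      class Agent:
          def __init__(self, name):
              self.name = name

          def act(self, task):
              return self.name + " works on " + task

      class Holon(Agent):
          # A holon is itself an Agent, so holons can contain holons (recursion).
          def __init__(self, name, members):
              super().__init__(name)
              self.members = members          # sub-agents or sub-holons

          def act(self, task):
              # Externally one agent; internally the task is delegated to members.
              return "; ".join(m.act(task) for m in self.members)

      team = Holon("team", [Agent("a1"), Agent("a2")])
      fleet = Holon("fleet", [team, Agent("a3")])   # recursive aggregation
      print(fleet.act("deliver"))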

    Furthermore, we also got another evidence and finally the proof of having IVE in Evoos.
    This is relevant in relation to other technologies, such as for example Agent Chameleons, including NEXUS (see also the related quoted documents "" below).

    The reflective and distributed operating systems Aperion (Apertos (Muse)) and TUNES OS have meta-levels. Evoos added for example the

  • fractal or holonic, and holologic basic properties,
  • IAS and immobot for (the meta-levels of) an os itself,
  • IAS for the interaction between meta-layers, and
  • CAS and CRS for both the meta-levels and the interaction between them,

    which are not the environments and resources focused on by InteRRaP and CoMMA-COGs, because there an Agent-Based System (ABS), an Agent Environment (AE), including an IE and IVE, and an os are not themselves considered as an Agent-Based System (ABS).
    In this way and as a side effect, we also created the fields of

  • Resource-Oriented Computing (ROC),
  • Autonomic Computing (AC), and
  • Robotic Automation (RA),

    and a lot more.

    Once again, we also see no ROC, AC, RA, etc. with these other technologies.

    IAS, IE, and IVE, immobot, CAS, and CRS in general and CoMMA-COGs in particular have no (relation to)

  • ontology in contrast to the initial version of Evoos in 1999 and
  • Mixed Reality (MR), including Augmented Reality (AR) and Augmented Virtuality (AV) in contrast to the updated version of Evoos with the CVE VOS of 2002.

    These are more evidences, which show that we are the creators and pioneers, and were already far ahead in these fields of

  • ontology-based software agent or softbot,
  • intelligent agent (OntoAgent), MBAS or immobot, holon, MAS, CPS, etc.

    We also noted once again that only agents should be believable, but not the underlying Agent-Based System (ABS) and os.
    IAS, IVE, immobot, CAS, and CRS in general and CoMMA-COGs in particular have no resilience, including fault tolerance and trustworthiness (e.g. reliability, availability, safety, security, performance (Quality of Service (QoS)), etc.), in contrast to Evoos and its integration of MAS.

    In relation to Holonic Manufacturing System (HMS) we already mentioned an optimization cycle of a production line, which is related to the Deming cycle (see also the quoted document in the Clarification of the 18th of July 2021).

    It also provides further evidences that

  • Evoos and OS are truly original and unique, and
  • everything required for the implementation of Evoos and OS truly existed at that time already.

    By the way: It should be obvious that C.S. discussed Evoos respectively The Proposal at the department of Professor Doctor Wolfgang Banzhaf, when he gave the special lecture in Genetic Programming (GP) at the university for the first time, and we do have evidences and witnesses.
    We even had the impression that he and another scientist of the field of formal verification, specifically model checking with Spin, Professor Doctor Stefan Edelkamp, were chartered to give special lectures at that university due to the secret activities of C.S. and our corporation at that time (e.g. SoftBionics (SB), Evoos, and optimization and verification of the operating system microkernel L4).

    For better understanding, we quote a document, which is about the fields of agent simulation and Virtual Environment (VE) and the initial or first version of the Social Interaction Framework (SIF), and was published in February 1999: "SIF - The Social Interaction Framework
    System Description and User's Guide to a Multi-Agent System Testbed

    Abstract
    We present the Social Interaction Framework SIF and demonstrate how it can be used for social simulation. SIF is a simulation testbed for multi-agent systems. The key design aspects are the ability of rapid-prototyping, a broad implementation platform, the possibility of controlling agents by human users and easy access to the internal data of every agent in the simulation. SIF implements the EMS (Effector-Medium-Sensor) paradigm, which provides a generic agent-world interface.
    In this document we describe the architecture, example applications that have been developed at DFKI, and we give an easy-to-follow ten-step guide for creating simulations with SIF.

    Introduction
    [...]
    The Social Interaction Framework (SIF) provides a virtual testbed for evaluation and an environment for development of Multi-Agent systems. [...]
    [...]
    [Distributed Artificial Intelligence (]DAI[)] is not only providing methods to build simulations, but is also, as a science, a potential user since it requires an intuitive framework for complex, distributed simulations. For example, issues of adaptivity and scalability in multi-agent systems (MAS) cannot be investigated without the use of large-scale simulations. Using a human-oriented, realistic model of the environment allows a suitable embodiment for evaluating intelligent agents. Indeed, other sciences, such as the social sciences, are as well becoming aware of the usefulness of co-habited computer simulations. A common architectural framework for simulation is thus reasonable and must satisfy the following requirements:

  • Human User Interaction: Simulations should allow users to survey and influence the state of the simulated world. An intuitive visualisation as well as a corresponding user interface is required. From an agent's standpoint, user-controlled avatars and agents should not be distinguishable. From a user's perspective, an avatar should be semi-autonomous, i.e., its lifelike low-level behaviour should be changeable by frequent user adjustments.
  • [...]

    [...]

    Figure 6 Overview on the information flow during a simulation
    [The figure shows no integration of the Console GUI in the information flow (will become control flow in SIF-VW 2000), but instead only a uni-directional connection from the array of actions to the media and a bi-directional connection between the media and the Console GUI.]

    Figure 7 Using a control pad to control [an] agent
    [The figure shows an agent, which will become an avatar.]

    [...]

    Outlook
    New developments in Virtual Reality (VR) have brought a qualitative change to human-computer interaction, in the form of co-habited virtual worlds (CHVW). In such worlds, synthetic agents and avatars (agents that are controlled and supervised by human users) interact in a globally networked setting. Applications of CHVW are, for example, virtual conferences where lifelikeness and interaction of avatars is a key issue. One could also think of virtual marketplaces incorporating electronic salesmen agents and customer avatars as a platform for future e-commerce. Similarly, the entertainment issue of interactive, virtual theatre requires mixed populations of synthetic characters and half-human, half-computationally steered personae. DAI technology, such as integrated into SIF, provides the key to develop the computational means of realising CHVW.
    Indeed, the role of a human agent within SIF has already been addressed by the Control Window and the Control Pad GUI. The human perceives the world through a special sensor (his browser) and acts upon the world through a special effector (the control pad). Because of the restricted bandwidth of the human-computer interface, it is convenient to let particular agents (avatars) represent the human in the simulation. The agent perceives the world on behalf of the human and acts upon that perception accordingly. Thereby, the user is able to trace the avatar's behaviour and to guide and command its avatar through his action.
    We are currently experimenting with a Web-Based User Interface to SIF which connects multiple human users remotely to a SIF simulation. It uses standard platforms, such as VRML, JAVA™, and RMI, to visualise the state of the simulation and its change via three-dimensional, interactive graphics and animations. The key to do so is to apply asynchronous network technology. Asynchronous visualisation lacks the guarantee of persistence, but enables to decouple the simulation from the computational operations of the clients. This is important for upholding the reproducibility of simulation results. Network technology is the key to bring together various users from remote places and on different platforms. [...]"

    We also quote and translate a document, which is about the fields of agent simulation, Holonic Agent System (HAS), Cognitive Agent System (CAS), and Virtual Environment (VE) and the updated or second version of the Social Interaction Framework (SIF), and was published in January 2000: "SIF-VW: An Integrated System Architecture for Agents and Users in Virtual Worlds
    Abstract
    A recent trend in research around virtual environments appears in the combination of techniques and research results from the fields of Distributed Artificial Intelligence, Distributed Interactive Simulation, and Virtual Reality into a new paradigm that we call Co-habited Virtual Worlds (CHVW). This paradigm describes the interaction of artificial agents and user-controlled avatars in a networked artificial environment. [...]

    Introduction
    Recent developments in virtual reality are set to revolutionize human-machine interaction. In Co-Habited Virtual Worlds (CHVW), autonomous agents will encounter programs controlled by human users - so-called avatars - by means of which the user on the one hand has access to such a world and on the other hand is represented as adequately as possible in the world. Due to the increasing networking, such a virtual world can be realized in a global network, such as the Internet. Applications of CHVW are for example in virtual conferences, or in the area of the constantly growing electronic commerce. Here, the use of avatars as well as autonomous agents is conceivable. The entertainment industry is probably the largest application area: virtual worlds will be brought into living rooms in a playful way. An efficient CHVW platform must meet the following requirements:

  • Multi-user operation: A CHVW platform must be able to provide access to the virtual world for multiple human users. This requires intuitive user interfaces and visualizations. From a user's perspective, autonomous agents should be indistinguishable from avatars of other users. Avatars should have a certain degree of autonomy, i.e., if possible, the user should only give guidelines to the avatar, which the avatar then implements appropriately.
  • Universal Platform: [...]
  • Autonomy and transparent distribution: A popular definition describes an agent as an entity that perceives its environment by means of sensors and acts in this environment by means of effectors [...]. Agents are thus autonomous by definition, i.e. they are free in their actions (within reason, of course) and, in particular, their internal state is influenced only by their perception and not by direct memory manipulation from the outside [...].
  • Fine-grained actions and perceptions: [...]
  • "Rapid Prototyping": [...]

    [...] SIF-VW (Social Interaction Framework for Virtual Worlds) [...] SIF-VW is based on the so-called Effector-Medium-Sensor (EMS) architecture [...] Among other things, we look at an explicit modeling of trust between individual agents. Furthermore, we briefly present the research goals and first results of the project CoMMA-Cogs [...].

    The EMS Architecture
    The goal of the Effector-Medium-Sensor architecture [The EMS model. 1998] is the software implementation of the already mentioned agent definition [Artificial Intelligence, A Modern Approach. 1996]. The basic idea of the EMS architecture is strongly oriented towards a natural mapping of the processes in the "real", i.e. the physical world that surrounds us. An example of this is verbal communication [...].
    [...] The term "medium" is used in this sense as an abstraction for any kind of information transport. In addition to the medium for verbal communication presented here as an example, any number of other media are conceivable, for example for physical interaction between agents or between agents and objects in the virtual world.
    The central task of any medium is to encapsulate a data model for certain aspects of the virtual environment. Manipulation of the data model is accomplished through transition rules implemented in the medium that control the transition from one world state to another. These state transitions are usually triggered by actions of the agents, but there is also the possibility of internally triggered transitions, e.g., for time-dependent events.
    Each agent has various effectors and sensors with which it can change or register the state of the medium [...].
    After a successful registration has taken place, the newly added agent in each case is integrated into the [...] represented control cycle. Now it can use effectors to perform actions that manipulate the medium and receive information about its environment via sensors. [...] This information is transmitted by the medium in the form of percepts, which are picked up by a sensor of the agent and forwarded to the agent.
    In addition to managing the world model, a medium also has the task of decoupling agent control flows to enable pseudo-parallel agent execution behavior.
    [...]

    Figure 3: Control flow
    [The figure shows an integration of the GUI in the control flow (was information flow in Figure 6 of SIF 1999) going from the array of actions to the media, from the media to the GUI, and from the GUI to the array of actions in a circle or loop, but no bi-directional connection between the media and the GUI.]

    Integration of users as avatars
    [...] Besides the subjective views of the agents, however, there is an additional user view in SIF. This view provides a two-dimensional, JAVA-based or three-dimensional VRML-based visualization of the overall system, which can be used by the user for monitoring and control. In addition to the global view, the user can also enter into the local view of an agent at any time and control the respective agent by means of special command percepts; the agent then acts as an avatar. The indirect control of such an avatar [...] enables the agents to react partially autonomously to their environment and at the same time to accept commands from the user that influence the overall behavior of the agent. The communication between user and avatar takes place exclusively via the command medium.

    Figure 4: Integration of users via avatars
    [The figure shows an avatar, which was an agent in SIF 1999.]

    Applications
    [...]

    Socionics
    At the end of 1999, the priority program Socionics [...] started, in which the knowledge exchange between Distributed AI and Sociology shall be promoted. In this project SIF is used as a simulation environment for multi-agent systems.
    [...]

    SIF and Electronic Commerce
    SIF is also used in a number of projects to study CHVWs in the context of electronic commerce (e-commerce). [...] Typical for these scenarios is that although the possibility of communication is given, it cannot be assumed, as in any open system, that a communication partner will respond honestly, i.e., disclosing all relevant information and goals. Due to the size of the set of agents and their resource constraints, observability is limited and there is a lack of sufficient data to independently build a model of potential interaction partners. Therefore, learning is of paramount importance in this environment. However, the conditions for it are exceptionally poor. One solution to this problem is the possibility of evaluating the honesty of other agents (their "trustworthiness" if you will). If an agent has the ability to compute trust in other agents, it is able to extremely increase the amount of data needed to build a model of other agents in a very short time. This greatly increases the reliability of automation and minimizes risk. Trust plays a central role in these scenarios.
    [...]
    We have recently completed such a refinement using SIF. It involves modeling trust in other agents and protection mechanisms against fraudulent agents. [...] A game-theoretic model (the openly played prisoner's dilemma) serves as the interaction. The rules of interaction, the agents and their cognitive performances are realized in SIF.
    [...]

    A cognitive architecture for social agents: The CoMMA-Cogs Project
    Within the CoMMA-Cogs project (Cooperative Man Machine Applications - Cognitive Architecture for Social Agents) (COGS99), we are developing an architecture for multi-agent systems that contains two novel features:
    On the one hand, we build agent societies called holonen==holons ([Holonic Multi-Agent Systems. 12th of May 1999]) using recursive agent structures; on the other hand, we extend the InteRRaP agent architecture ([The Design of Intelligent Agents: A Layered Approach. 1996]) with resource-oriented concepts whose parallels can be found in cognitive motivation models ([Modelling motivational behaviour in intelligent agents in virtual worlds. 1998]). The main application of Cogs is to support animated characters in virtual worlds. In Presence ([Integrating models of personality and emotions into lifelike characters. 1999]), a project accompanying Cogs, we additionally develop emotion and personality profiles for animated agents with speech understanding that guide users through a web site.
    We use SIF as the implementation platform for developing animated agents. To this end, we are developing and extending the system in three directions: First, we are innovating the 3-D visualization in SIF-VW. Second, we use the Voyager Object Request Broker to transparently distribute SIF over a computer network. Third, we construct an interface (API) to integrate agents developed and controlled in other programming languages, to realize a more open distributed system. [...] XMLRPC [...]
    In addition to supporting animated virtual characters in general, we are investigating RoboCup-Rescue as a specific application. [...] aimed at supporting and simulating a rescue operation after a major urban disaster. Such a complex, dynamic scenario places high demands on simulation software. Damage prediction [...] requires the integration of multiple physical models and the computational power that can only be provided by an open fault-tolerant computer network. A RoboCup rescue simulation engine is intended not only to provide training and planning support for future emergencies, but also to play a key role in supporting rescue teams in an actual disaster.

    Summary and Outlook
    In this paper, the fundamentals of the [...] developed basic architecture for intelligent virtual environments have been presented. The core of the generic architecture is the EMS model oriented to the processes in a physical environment. [...] These application scenarios require the development of new methods and technologies, in particular special interfaces between simulation environment and sensors as well as between human users and agents are necessary. Providing the required infrastructure in the form of software and hardware to support this kind of augmented reality is a particular challenge." SuperBingo!!!
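    To clarify the quoted EMS (Effector-Medium-Sensor) control cycle, in which effectors trigger actions, the medium applies transition rules to its data model and emits percepts, and sensors forward the percepts to the agents, here is a minimal sketch. All names are our own illustrative assumptions and not SIF or SIF-VW code:

      # Minimal sketch of an EMS-style control cycle; hypothetical names,
      # not the SIF or SIF-VW implementation.

      class Medium:
          # Encapsulates a data model and transition rules for part of the world.
          def __init__(self):
              self.positions = {}
              self.percepts = []

          def apply(self, action):
              # Transition rule: an agent's action moves it in the world model.
              agent, dx = action
              self.positions[agent] = self.positions.get(agent, 0) + dx
              # The medium answers with percepts for the registered sensors.
              self.percepts.append((agent, self.positions[agent]))

      class EmsAgent:
          def __init__(self, name, medium):
              self.name, self.medium = name, medium

          def effector(self, dx):
              self.medium.apply((self.name, dx))   # act upon the medium

          def sensor(self):
              # Pick up the percepts addressed to this agent.
              return [p for p in self.medium.percepts if p[0] == self.name]

      medium = Medium()
      avatar = EmsAgent("avatar", medium)
      avatar.effector(+1)     # a user's command percept could trigger this action
      print(avatar.sensor())  # the avatar perceives its new position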

    For better understanding, we also quote and translate once again The Proposal, published and discussed on the 10th of December 1999: "[...]
    2.2.2 Services of an operating system
    The services of an operating system can be divided into two sets:

  • The first set mainly supports a user in the role of a software programmer. It contains services for program execution, file system manipulation, and communication, as well as for input/output operations, and error tracing.
  • The second set does not specifically serve a user, but ensures the efficient execution of the operating system in multi-user mode. It includes the services for resource allocation, accounting with respect to used resources, and the execution of security measures.

    [...]

    5 Summary
    [...] the following assignment of the physiological senses and the muscles of an organism to a possible underlying hardware is proposed:

  • the sensing - the keyboard and the mouse
  • the hearing - the microphone, the network card and the modem
  • the seeing - the video camera and the scanner
  • the muscles - the monitor and the printer
  • the speaking - the loudspeaker, the network card and the modem
  • the already existing brain mass and functionality - the BIOS and the CPU
  • the pulse - the CPU clock"
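    Read as a data structure, the quoted assignment amounts to a simple mapping from physiological functions to device classes. The following sketch is purely an illustrative restatement of the quote above:

      # The quoted sense-to-hardware assignment of The Proposal, rendered as a
      # simple mapping; purely an illustrative restatement of the quote above.

      SENSE_TO_HARDWARE = {
          "sensing":  ["keyboard", "mouse"],
          "hearing":  ["microphone", "network card", "modem"],
          "seeing":   ["video camera", "scanner"],
          "muscles":  ["monitor", "printer"],
          "speaking": ["loudspeaker", "network card", "modem"],
          "brain":    ["BIOS", "CPU"],
          "pulse":    ["CPU clock"],
      }

      def devices_for(sense):
          # Look up which hardware realizes a given physiological function.
          return SENSE_TO_HARDWARE.get(sense, [])

      print(devices_for("hearing"))   # ['microphone', 'network card', 'modem']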

    Comment
    In addition to the common general analysis of the quoted documents, we have also compared in more detail the Social Interaction Framework (SIF) of 1999 with the Social Interaction Framework (SIF)-Virtual Worlds (SIF-VW) of January 2000 and The Proposal, and found some highly suspicious changes, which one can also interpret as the first evidences for espionage.

    In relation to this work we also note that a field was presented as new by European research institutes in 1999, which eventually is based on the IVE Oz project of 1992 to 1994, which raises the question of what triggered this trend.
    In this specific case we see that the relatively new Cooperative Man Machine Applications (CoMMA) and the relatively new Social Interaction Framework (SIF), which has been updated to SIF-VW (with the user in the loop; see details below), have been combined with the fields of IVE and Holonic Agent System (HAS) as well.
    So we have the CoMMA - Multiagent Planning and Scheduling (CoMMA-MAPS) and Social Interaction Framework for Virtual Worlds (SIF-VW) projects, the Holonic Agent System (HAS), and the hybrid agent architecture InteRRaP, and then we have the CoMMA - Cognitive Architecture for Social Agents (CoMMA-COGs) and SIF-VW with CoMMA-COGs as an application of SIF-VW. But the dates do not match. And in large parts it is about 5 to 7 years old matter related to believable agents, IVE, and so on.
    In addition, we have Metaglue and recursive Metaglue with Computational Intelligence. All at exactly the same time when C.S. was creating Evoos.
    That is odd and unconvincing.
    Through the years we learned that it has something in common with the activities of C.S. and our corporation.

    A user is integrated by her, his, or their avatar and uses the GUI to send actions to it and get perceptions from it.

    The title of this work already points to a certain .
    But it is obvious that this work does not merge or even fuse the real or physical (information) space, environment, world, and universe respectively reality and the virtual or metaphysical (information) space, environment, world, and universe respectively virtuality.

    The internal state of an agent is influenced only by its perception and not by direct memory manipulation from the outside, which shows the difference to our Evoos and also our OS, because the spirit can do this as well.
    Furthermore, there are no other connections between the internal and external representation.

    "The basic idea of the EMS architecture is strongly oriented towards a natural mapping of the processes in the "real", i.e. the physical world that surrounds us." A medium is an abstraction of information transport with an internal state and a data model. So no mirror world and no merge of real and virtual worlds respectively realities.

    The selection of that EMS architecture and the description of its medium is a failed attempt to steal more of our Evoos, specifically our assignment of the senses to a possible underlying hardware, as proven by the missing merging and missing fusion of the real and virtual worlds respectively realities.

    For better understanding, we also quote an online encyclopedia about the subject percept: "In perceptual psychology, percept is the subjectively experienced, conscious (phenomenal) result of a perceptual process. Strictly to be distinguished from the percept are:

  • [...]
  • all cognitive (spiritual, mental) processes"

    But we could not find the term receptor, because the counterpart of the percept is the action in the medium of the EMS architecture.

    The Presence project is another part taken from the Oz project in relation to CoMMA-COGs, but again more than 5 years after it and not mimicked directly. So something triggered their interest several years later, and what they did is very much exactly what we did with Evoos ....
    We have not mentioned IVE in Evoos, because an os is the underlying system and it would have been too much and too unrelated to the main topic of The Proposal. But as can be seen now, other entities noted our interest and intention in this relation.

    Resource-Oriented Computing (ROC) is us, because CoMMA-COGs and Evoos match, but differ in relation to the operating system. But exactly this relation to, for example, UNIX beyond an Interface Agent is the defining element of ROC. In this context, Evoos is not a softbot for UNIX, but UNIX itself as a softbot. This also proves microSOA, Autonomic Computing (AC), and a lot more.

    In relation to ROC and mSOA we also have a crystal-clear connection to the highly dubious actions of the companies Microsoft and Hewlett-Packard, and in relation to AC to the company IBM. We also have the impression, based on our vast amount of experiences made in more than 2 decades, that the company SAP also has its hands in the game.

    But there is a giant problem with this approach, because a trustworthy agent

  • is not resilient in general and
  • is not relevant in particular,

    if the underlying systems are not resilient in general and trustworthy in particular.
    And while we are already at this topic, we note that we cannot see any smart contract transaction protocol, blockchain technique, and Byzantine protocol at all.
    In the comments to the documents titled "Intelligent agents: Theory and Practice" and "Watching Your Own Back: Self-Managing Multi-Agent System" quoted above, as well as in other explanations and clarifications, we showed how our Evoos and our OS integrate

  • formal semantics, including operational semantics and denotational semantics, and
  • resilience, including fault tolerance and trustworthiness,

    and solve the problem.
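    The trust bookkeeping described in the quoted SIF e-commerce scenario can be sketched as a running score over repeated prisoner's-dilemma-style interactions. The following is only our illustrative reading with hypothetical names, not the SIF trust model:

      # Minimal sketch of trust modelling over repeated prisoner's-dilemma-style
      # interactions; our own illustrative reading, not the SIF trust model.

      def update_trust(trust, cooperated, rate=0.2):
          # Move the trust score towards 1 on cooperation, towards 0 on defection.
          target = 1.0 if cooperated else 0.0
          return trust + rate * (target - trust)

      def choose_partner(trust_scores):
          # Prefer the partner with the highest accumulated trust.
          return max(trust_scores, key=trust_scores.get)

      scores = {"seller_a": 0.5, "seller_b": 0.5}
      for cooperated in [True, True, False]:     # observed behaviour of seller_a
          scores["seller_a"] = update_trust(scores["seller_a"], cooperated)
      scores["seller_b"] = update_trust(scores["seller_b"], True)
      print(choose_partner(scores))              # pick the more trustworthy agent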

    By comparing the version SIF 1999 and the version SIF-VW 2000 in more detail, we found out that the most interesting differences, besides the other quoted sections, are in the quoted figure captions.

    When we read the version SIF 1999, we asked ourselves immediately after the abstract why the version SIF-VW 2000 was published and presented at all, and even as something new. The next questions were:

  • What was new in the version SIF-VW 2000? and
  • What makes the version SIF-VW 2000 different in comparison to the Oz project for simulated Virtual Worlds and the version SIF 1999?

    At first, we thought the answer was easy and would be the extension of SIF 1999 with a Virtual World (VW) or Virtual Environment (VE), due to the different titles of both works. But obviously, that new designation SIF-VW was meant for camouflage and confusion, specifically to mix prior art with new art.

    We also wondered why the version SIF 1999 was not called SIF-VW, because it is already based on a VW and somehow already is SIF-VW, but only the version SIF-VW 2000 was called in this way, which emphasizes the integration of the user and the User Interface (UI) and could have been called SIF-VRE.
    Howsoever, it is obvious that this new designation was added after the publication of the version SIF 1999 and there must be a reason why this action was done later, most probably after the publication of our Evoos, as explained in further detail below.

    A closer look showed that our Evoos was taken as a blueprint for the update of the version SIF 1999 to the version SIF-VW 2000.
    The first evidence is that "Human User Interaction" became "Multi-user operation", which reflects the second set of operating system services. Note that the feature was not designated as multi-user interaction, but operation, to come even closer to the field of operating system (os) and our Evoos.
    Correspondingly, the user, the agent, and the Console GUI of the version SIF 1999 were integrated in what was called before the information flow, the information flow was redesignated the control flow and became a loop, and the agent became the avatar in the version SIF-VW 2000, as can easily be seen by comparing the figure 6 of SIF 1999 with the figure 3 of SIF-VW 2000 and the figure 7 of SIF 1999 with the figure 4 of SIF-VW 2000.
    But we acknowledge that the term avatar is already used in the description of the version SIF 1999, though in an incorrect way for an agent, which can be configured in real-time. But it remains a simulated VW like the one of the Oz project.
    Interestingly, they also came up with such an older project (the Oz project, around 1990 to 1994), as we have seen in the case of Soft Computing (SC or SoftC) and other older projects and fields, including for example the fields of Intelligent Agent System (IAS), Intelligent Environment (IE), and Cybernetics. We already mentioned these other odd observations of activities, which happened exactly at the same time, in the same weeks and few months, when C.S. created our Evoos and wrote The Proposal, and said that they all were focusing on our activities at that time.
    We also note that the version SIF 1999 was classified by the keywords Information Theory, Coding Theory, and Signal Processing.

    The second evidence is the integration of the Social Interaction Framework (SIF) and the Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents (CoMMA-COGs), which was also missing in the version SIF 1999, but was presented with the version SIF-VW 2000.

    And by another comparison with The Proposal, we also found the third evidence in direct connection with operating system and multi-user mode, which is Resource-Oriented Computing (ROC) and the resource-awareness of CoMMA-COGs, and also its description, which reminds us of an operating system as well.

    As the next implication, we also got the explanation why CoMMA-COGs is based on the Holon Manufacturing Agent System (HMAS), which is resource-aware and based on the reactive-deliberative respectively hybrid agent architecture InteRRaP, which is a layered BDI agent architecture. The latter closes the circle with the field of Cognitive Agent System (CAS).

    But at this point the implications do not stop, because we can also see that they have added trust respectively resilience (e.g. fault tolerance and trustworthiness) with the version SIF-VW 2000, which is also a feature of the Distributed operating systems (Doss) TUNES OS and Aperios (Apertos (Muse)), and in this way stolen this os property by putting it into the middleware as well, like all the other os features.
    Now, it should also be crystal clear (once again) why we have as one of the basic properties of our OS (mostly) being validated and verified, and with the Ontologic File System (OntoFS) component, Berkeley Open Infrastructure for Network Computing (BOINC), and so on, and also the smart contract transaction protocol and the blockchain technique.
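
    Merely for illustration of the general technique, we sketch the blockchain technique in its most basic form in the programming language Python: a hash-linked list of records, so that any later manipulation of the recorded history becomes detectable. All names are our own assumptions for illustration, and the sketch does not describe the concrete smart contract transaction protocol and blockchain technique of our OS:

import hashlib
import json

def chain_append(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def chain_valid(chain: list) -> bool:
    """Recompute every hash; any manipulated block breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != digest:
            return False
        prev_hash = block["hash"]
    return True

chain: list = []
chain_append(chain, {"op": "write", "file": "a.txt"})
chain_append(chain, {"op": "delete", "file": "b.txt"})
print(chain_valid(chain))            # True: the history is intact
chain[0]["record"]["op"] = "read"    # manipulate the recorded history
print(chain_valid(chain))            # False: the manipulation is detected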

    Furthermore, SIF-VW 2000 is only used as a development environment for CoMMA-COGs, which means as a simulation environment.

    Moreover, we are now talking about a different type of immersive 3D Virtual World (VW) or Immersive Virtual Environment (IVE or ImVE) respectively Virtual Reality Environment (VRE), like the original Metaverse as well, because the user is in the control flow, or better said in the control loop, which is also called Human-In-The-Loop (HITL) and User-In-The-Loop (UITL), and in this way in a feedback loop.
    The latter also points again to the field of

  • Cybernetics, because a feedback loop is a characteristic feature of this field, and also
  • Cognitive Architecture (CogA) and Cognitive System (CogS), like for example the Executive Process/Interactive Control (EPIC) architecture, because UITL is a characteristic feature of EPIC,

    which again are further evidences and show why and how our Evoos was taken as a source of inspiration and blueprint for the update of the version SIF 1999 and for many other works in the year 1999 and the following years.
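
    For clarity, we sketch such a feedback loop with the user in it in the programming language Python. All names and the trivial proportional correction are merely our own assumptions for illustration and are not the control loop of the version SIF-VW 2000 or of our Evoos:

class World:
    """A trivial simulated world state: one scalar the agent acts upon."""
    def __init__(self):
        self.state = 0.0

    def apply(self, action: float) -> None:
        self.state += action

def agent_policy(percept: float) -> float:
    """Deliberative part: derive an action from the current percept."""
    return -0.5 * percept   # simple proportional correction

def read_user_input(tick: int) -> float:
    """Stand-in for the human in the loop; here a scripted disturbance."""
    return 1.0 if tick % 3 == 0 else 0.0

world = World()
for tick in range(9):
    percept = world.state              # sense: world -> percept
    action = agent_policy(percept)     # deliberate: percept -> action
    action += read_user_input(tick)    # the user closes the control loop
    world.apply(action)                # act: action -> world (feedback)
    print(f"tick={tick} state={world.state:+.2f}")

    The decisive point is that the user input enters the same loop as the agent's percepts and actions, so that the user is part of the feedback and not merely an external observer of the simulation.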

    See also the fields of Wearable Computing (WC or WearC), and Humanistic Computing (HC or HumanC) or Humanistic Intelligence (HI), which on the one hand are connected with each other, and which on the other hand C.S. significantly advanced further and took as one of the many sources of inspiration for the creation of something new with our Evoos, which

  • integrates for example
    • Humanistic Computing (HumanC) with
      • Ontonics,
      • Ontologic Computing (OC),
      • HardBionics (HB) and SoftBionics (SB),
      • Agent-Based System (ABS), Agent-Oriented technologies (AOx), specifically Multi-Agent System (MAS), Holonic Agent System (HAS), Intelligent Agent System (IAS), and Cognitive Agent System (CAS),
      • SoftBionic Computing (SBC), specifically Software Agent and Soft Computing respectively Soft Agent Computing (SAC),
      • Cognitive Computing (CogC),
      • Autonomic Computing (AC), Autonomic technologies (Ax),
      • Resource-Oriented Computing (ROC), Resource-Oriented technologies (ROx),
      • Service-Oriented Computing (SOC), Service-Oriented technologies (SOx),
      • Space-Based Computing (SBC), Space-Based technologies (SBx), also wrongly called Grid, Cloud, Edge, and Fog Computing (GCEFC),
      • and so on,
    • HumanC and CogC with
      • operating system (os), Distributed operating system (Dos), etc.,
      • Binary-Relation Model (BRM), Arrow Logic (AL), Arrow System (AS), etc.,
      • Semantic World Wide Web (SWWW), Dynamic Semantic World Wide Web (DSWWW),
      • and so on,
    • CogC with Mediated Reality (MedR), Mediated Virtuality (MedV), Augmented Reality (AR), and Augmented Virtuality (AV) (eXtended Mediated Reality (XMedR) exclusive Physical Reality (PR) and Virtual Reality (VR)),
    • HAS and CoMMA-COGs with eXtended Mediated Reality (XMedR),
    • and so on,
  • provides the foundation for the
    • fields of quantified self or LifeLogging (LL), and qualified self based on Agent-Based System (ABS), and Cognitive System (CogS) and Cognitive Computing (CogC) in addition to the field of quantified self or LifeLogging (LL) based on Information System (IS) and information processing, and
    • exploration of Humanistic Intelligence (HI) through physiological eXtended Mediated Reality (XMedR), including physiological Mediated Reality (MedR), and much more,

    and also

  • extends the human body (e.g. brain, muscle, etc.) and mind with an artificial body and mind, and even a cybernetical body and mind, which can even be compatible with and identical to the real body and mind (see also our Bridge from Natural Intelligence to Artificial Intelligence (Bridge from NI to AI)).

    Also do not confuse an Augmented Reality Environment (ARE), with its virtual overlay for the real world, with a non-immersive or immersive 3D VW or VE respectively VRE, like for example SIF-VW, like the authors did when they designated SIF-VW as "this kind of augmented reality". In fact, there is nothing in this document that points to proper AR, only a simulation in a 3D VW or VR.
    This shows how new the field of AR truly was at that time, so that even scientists in the field of VR were confusing these realities. We have mentioned this already in the Clarification of the 6th of May 2016.
    Also note the relation to the field of Cyber-Physical System (CPS) and see the related Clarification of the 18th of July 2021.

    This led to the connection with the plagiarisms titled "Agent Chameleons: Agent Minds and Bodies", "Agent Chameleons: Virtual Agents [Powered By] Real Intelligence", and "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents". In fact, the fraudulent authors of these documents recognized too late, in 2002 or 2003, that immersive 3D VW or VE respectively Virtual Reality Environment (VRE) was added, and that Augmented Reality Environment (ARE) and Augmented Virtuality Environment (AVE), and hence Mixed Reality Environment (MRE), were already included in our Evoos described in The Proposal through the fusion of realities (if in doubt, then one should ask the question why C.S. also assigned the pulse in this context), which is a difference that is not even in SIF-VW with CoMMA-COGs.

    As we explain in the comments to the related fraudulent works "Agent Chameleons: Agent Minds and Bodies", "Agent Chameleons: Virtual Agents [Powered By] Real Intelligence", and "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents" below, they all have seen that physical, mixed, and virtual realities are related to our Evoos, but they have not seen, and hence not understood, that we did it all already with the assignment of the natural modalities and other matters to the artificial modalities and other matters as part of our Caliber/Calibre and cybernetic reflection, cybernetic self-portrait, cybernetic augmentation, and cybernetic extension, etc.
    At this point, one can also see that they have recognized too late the integration of VR and Cognitive Computing (CogC), and also Humanistic Computing (HC or HumanC), which is also referenced by the use of the term "real intelligence".

    We can also see now very well once again that they all do not have the fusion of real or physical and virtual and metaphysical (information) spaces, environments, worlds, and universes respectively realities.

    But at this point the implications do not stop, because we have here also asynchronous networking and virtualization, which they have put into the middleware at first (e.g. only Web Services (WSs) and Java Virtual Machine (JVM), a VM for a logic programming language, etc.), but after we published our Ontologic System (OS) they have taken and even stolen all these virtualized properties and other functionalities and services, and put them back into the kernel space or operating system layer.
    Because our Evoos is the first act of the creation of a(n)

  • cybernetic self-portrait, cybernetic self-augmentation, and cybernetic self-extension,
  • cybernetic self-reflection as a proposition of an ontological argument or ontological proof,
  • ontological argument or ontological proof as a multimedia system and the OS, and
  • totally new system architecture or design,

    these os functionalities and services are copyrighted as part of our Evoos with its Evolutionary operating system Architecture (EosA) and our OS with its Ontologic System Architecture (OSA) as well.
    And all FOSS projects have to remove the parts of our Evoos and our OS, because they would at least have to reference C.S. as creator, as required by law and by their illegal licensing practice, but cannot reference C.S. and get a license, because all FOSS and other types of licenses are incompatible with this legal requirement. We just give no allowance to do so, which we also have the moral right for. :)
    If we are making compromises, then all other entities have to do so as well, and that is definitely not the way it worked in more than the last 2 decades.

    All the other stuff is free to use, but not our creations, improvements, and contributions.

    Definitely, espionage was going on at that time, which is why the allowance and license for the performance and reproduction of certain parts of our OS is only provided in return for a written revelation and confession in relation to CoMMA-COGs, SIF-VW, OntoAgent, OntoJava, and also Agent Chameleons, Nexus, and everything else.

    We quote a document, which is about Visual Programming (VP), Intelligent Agent-Based System (IABS), and Virtual Environment (VE), and was published between the 13th and 14th of December 1999: "Reality and Virtual Reality In Mobile Robotics
    [...]
    This paper proposes the use of [Virtual Reality Modelling Language (]VRML[)] in the visualisation of intelligent agent communities. We present the Virtual Robotic Workbench, which by monitoring the mental states of agents in a multi-agent system, can display and update a VRML representation of the agents environment. An obvious application of such technology is robotics.
    The Social Robot Architecture (SRA) combines reactivity, deliberation and social ability to enable robots to deal competently with complex, dynamic environments. We demonstrate how the Virtual Robotic Workbench can present a virtual window into the robots' environment, allowing for remote experimentation and thus providing insights into the inconsistencies between a robot's mental image of an environment and the physical counterpart. Figure 1, below, presents an architecture, which seamlessly integrates, real world robots, multi-agent development tools, and VRML visualisation tools into a coherent whole.

    Fig. 1. Social Robot: The Coherent Whole
    LAN

    Virtual Reality & Simulation

    Internet WWW
    Remote viewing/experimentation

    Local viewing/experimentation

    Proxim WaveLAN
    [swarm of] Nomad Scout II [mobile] robot

    Virtual Reality
    VRML (Virtual Reality Modelling Language) is a recent advancement in Internet technologies [...]. VRML allows for the development of dynamic 3D worlds, which can be viewed through a web browser. [...] Some applications of the technology are as follows:

    Virtual communities
    A substantial area of VRML research is the provision of software tools which enable user immersion in a connected virtual community. Several such VR toolkits are available which enable the creation and deployment of multi-user interactive virtual communities. [...]

    E-commerce
    ViSA (Virtual Shopping Agent Architecture) [...] is representative of a new generation of e-commerce system that enables the user to truly enter the retail arena and participate in the virtual shopping experience. The ViSA System offers: immersion within a 3-D shopping environment and virtual community of shoppers; intelligent assistance in all stages of product procurement; contextualised and personalised shopping experience for individual shoppers.

    Robotics
    Many exemplary systems demonstrate the use of VR for simulation and visualisation within robotics. Here we consider but four subsystems.
    [...] model and display simulations of workcell layouts [...]
    [...] The central scientific goal of the [mobile robot] project is the analysis and synthesis of computer software that can learn from experience. The team believes an essential aspect of future computer software will be the ability to flexibly adapt to changes, without human intervention. The ability to learn could soon enable robots [...] to perform complex tasks, such as transportation and delivery, tours through buildings, cleaning, inspection, and maintenance. [The cited document does not mention this universal scientific goal at all, at least not in the sense of our Evoos. See the comment for the details.]
    While giving tours in the [...] Museum, [the mobile robot] can be observed and even teleoperated through the Internet via a virtual reality medium. [...] will make available on-line camera images recorded in the museum.
    [...] is a robot simulation system that enables the telemanipulation of real robots via WWW. It provides a detailed graphical model of the robot that can be manipulated intuitively using the mouse. [...]
    [...] provides a working model of its Autonomous Environmental Sensor for Telepresence (AEST). This robot tours the inside of a building and automatically creates a 3-D map of the interior comp[l]ete with surface texture information. The 3-D reconstruction module produces two models; a geometrical model suitable for conventional [Computer-Aided Design (]CAD[)] systems, and another composed of triangular meshes suited to graphical visualisation. Both representation use VRML format and are thus viewable with any WWW browser.
    [...]

    The Social Robot Architecture
    The Social Robot Architecture aims at achieving team building and collaborative behaviour through the judicious synthesis of the reactive model with that of the deliberative model. The architecture (figure 2) is comprised of four discrete layers: physical, reactive, deliberative developed using Agent Factory, and social.

    Fig. 2. The Social Robot architecture: The Robot Agent
    Social
    ACL: Teanga
    [...]

    Deliberative
    [...]

    Reactive
    [...]

    Physical
    Motor Controller
    Motors
    Sensors: proximity [] sonar [] odometry [] vision
    Digital Signal Process[ing (DSP)]

    Physical: Robots in terms of this research may take the form of either that of a physical entity, [...] or a simulated entity [...].
    Reactive: A series of fundamental reflex behaviours are implemented at this level. The sensory information is processed resulting in clear agent_events and communicated to the deliberative level. Agent_commands are received from the deliberative layer.
    Deliberative: This comprises of a Belief Desire Intention (BDI) [...] architecture developed through Agent Factory. [...]
    Social: Our agents interact via an Agent Communication Language (ACL), entitled Teanga.

    Agent Factory
    Agent Factory has been developed to facilitate the rapid prototyping of Multi-Agent Systems. The system offers an integrated toolset that supports the developer in the instantiation of generic agent structures that are subsequently utilised by a prepackaged agent interpreter that delivers the BDI machinery. Other system tools support interface customisation and agent community visualisation.
    In creating an agent community three system components must be interwoven, those of agents, a world and a scheduler.
    The agent is the core computational unit underpinning Agent Factory, it combines a series of attributes that represent and support the Mental State model of an agent, a set of methods (the actuators), a set of perceptors, an Agent Communication Language (ACL), and a Commitment Revision Strategy. This design is then executed using a generic Agent Interpreter. [...]
    The creation of an agent community is facilitated by the Agent Factory Development Environment, which provides a Component Library and a selection of tools for the rapid prototyping of agent communities. [...]
    The Agent Factory Run-Time Environment provides the support necessary for the release of a completed Multi-Agent System. This environment comprises of a Runtime Server and an Agent Interpreter. [...] Access to these environments is provided both locally through Graphical User Interfaces (GUIs) and remotely through the World Wide Web (WWW) via a purpose built Web Server.
    [...]

    The Virtual Robotic Workbench
    One of the key tenants of our research has been the provision of multiple views of multiple robot systems. The primary view is the physical perspective of the Nomad Scout II's navigating the physical world. The secondary, more abstract view, is a virtual reality perspective provided via the Virtual Robotic Workbench, which delivers a 3-D VRML world via the Internet (figure 3).
    Herein we harness the advantages of using virtual environments, by directly relating virtuality and reality in a seamless manner. This permits multiple views, information hiding and abstraction, system interaction, and behaviour scrutiny via snapshots and recordings. [...]
    [...]

    The Virtual Reality Visualiser
    [...] Within the context of Agent Factory, the Virtual Robotic Workbench utilises an existing tool, the Agent Factory Visualiser (AFV) to present a 3D view of the agents. [...] The Visualiser tool facilitates the presentation of Virtual Reality views of the agent community commissioning standard web browsing technologies. [...]
    [...] the agents' movements are mirrored in the VRML Scene.
    Figure 5 illustrates how an agent's virtual position may be updated based upon its physical position. [...]
    Each agent handles agent_events about that event. However, the update of the Virtual Robots' position does not occur through any deliberative action on the part of the agent. Instead, a tap is placed upon the event queue, which listens for a landmark agent_event. Upon detection of this event, the coordinates are taken and converted to the absolute coordinate system used by the Virtual Agents. Subsequently a system message informs the Visualiser to update the position of a given virtual robot.
    [...]
    This results in a real-time link between [from] the real robot and [to] its virtual counterpart.

    Experimentation
    Robot experiments are characterised by firstly selecting a world, subsequently situating robot(s) in this world, and finally ascribing behaviours to these robots. In this approach the odometric information sent by the real-world robot to Agent Factory is supplemented with sonar and visual information that may indicate detected environmental features, and relative distances from these features. Such sensor fusion enables the robot avatar position update not only by mirroring the uncertain real-world coordinate position updates, but also by matching current sensory information with the VRML world to reduce the uncertainty of the robot's position.
    Of course, we do not aspire to have a completely accurate real-time synchronisation between real and virtual robots. The VRML view is primarily a visualization tool and therefore it is sufficient to update and recalibrate the virtual world stochastically.
    [...]

    Stochastic Synchronisation Between Worlds
    The trade off between perception and representation has been extensively documented within AI literature. In a conventional sense mobile robots sense (perceive) their environment as they navigate through it. The emphasis is clearly upon perception rather than representation. Within the Social Robot Architecture we redefine this trade off. Perceptions are transformed into beliefs about the environment within which the robot exists. As such, perceptions underpin a subsequent thin representation of the environment. Individual robot perceptions and corresponding beliefs may subsequently be propagated to fellow robots in keeping with the social nature of our architecture. Thus, given the inherently myopic nature of our Nomads, an inexact and incomplete representation of the environment is derived through the augmentation of other partial views communicated by fellow robots. [...]
    When we contrast this with our virtual robots then the perception representation trade off is somewhat different. Our virtual robots have currently no perception capability. They do however have a fairly rich representation of the virtual environment, a direct replica of its real counterpart. By taking receipt of Agent Factory events the world is refreshed to reflect robot transformations. A vast amount of sensory data is culled by the Nomads and filtered through key feature footprints, which encode patterns of sonar data synonymous with particular features. [...] Rather than this being continuous it is stochastic in nature. [...]
    To date we have concentrated upon the flow of information from the real world to the virtual. A counter flow of data could be harnessed and as yet we have not truly investigated this. Given the richer representational model held by the virtual environment (i.e. a building comprised of floors, in turn comprised of rooms interconnected by corridors and populated with objects and robots) upon recalibration of the robot position, beliefs about the immediate location could be funnelled back to the robot agents. [...] This bidirectional information flow seems to offer mutual benefit.

    Discussion and Conclusions
    [...]
    Within the context of this work we regard robots as active intelligent objects. We would envisage that smart buildings would be comprised of a collection of intelligent interacting components some of which might be static (Heating Subsystem, Lighting Subsystem, Entry Controller) or dynamic (robots, AGVs, Lifts). Robot agents in the context of the Social Robot Architecture permit effective collaboration through their social nature. We advocate an agent community for the delivery of the intelligence necessitated in such smart buildings. [...]"

    Comment
    First of all, we would like to clarify that the cited document related to the mobile robot, which gives tours in a museum, does not mention at all that the central scientific goal of the [mobile robot] project is the analysis and synthesis of computer software that can learn from experience, specifically and explicitly not in the sense of the functionality of our Evoos.
    In fact, the adaptation is merely related to updating the internal map and planning the path of a mobile robot as part of the basic tasks of localization, collision avoidance, and path planning.
    Indeed, this cited document mentions "Resource adaptability. Several resource-intensive software components, such as the motion planner or the localization module, can adapt themselves to the available computational resources. The more processing time is available, the better and more accurate are the results [of the] resource-adaptive algorithms for state estimation and planning".
    But we do think that this resource adaptability is only based on a simple control function and therefore has nothing in common with learning in particular and the fields of Intelligent Agent System (IAS), Resource-Oriented Computing (ROC), and Autonomic Computing (AC) in general.

    Furthermore, a stochastic property is well described by a random probability distribution. What this has in common with Real-Time Computing (RTC) is beyond our imagination, because the detection of an event at random is the opposite of the (continuous) response to an event within specified time constraints or deadlines.
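
    To make the difference explicit, we contrast both notions in a small sketch in the programming language Python. The probability, the deadline value, and all names are merely our own assumptions for illustration:

import random
import time

DEADLINE_S = 0.010  # an assumed hard real-time bound of 10 ms

def stochastic_update(event) -> bool:
    """Recalibrate the virtual world only with some probability; no timing bound."""
    return random.random() < 0.2    # most events are simply dropped

def real_time_update(event) -> bool:
    """Handle every event and check that it met the deadline."""
    t0 = time.monotonic()
    # ... process the event (omitted) ...
    return (time.monotonic() - t0) <= DEADLINE_S

events = range(10)
used = sum(stochastic_update(e) for e in events)
met = sum(real_time_update(e) for e in events)
print(f"stochastic: {used}/10 events used for recalibration")
print(f"real-time : {met}/10 events handled within the deadline")

    The stochastic variant gives no guarantee at all about when or whether an event is handled, while Real-Time Computing guarantees a response to every event within the deadline.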

    The Agent Factory is the agent runtime environment, which is also used for the works "Agent Chameleons: Agent Minds and Bodies", "Agent Chameleons: Virtual Agents [Powered By] Real Intelligence", and "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents" quoted below. "The Deliberative level is provided via a Belief-Desire-Intention (BDI) methodology. The deliberation mechanisms are based upon those of Agent Factory (AF) [...]."

    The quoted document is basically about Mobile Robotic Systems (MRSs), but mentions neither a Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot), nor a software agent or softbot, interface agent, believable agent, social agent, emotional agent, user agent or avatar, and so on.
    Only in the last section of the last chapter are static intelligent interacting agent-based components of a smart building mentioned, which is relevant, exactly like the dates of publication, but seems to have been added only to match the main topic of the 1st International Workshop on Managing Interactions in Smart Environments (MANSE '99) and to push out a publication as quickly as possible.

    The same holds for the bidirectional flow of data between the real mobile robots and the VRE.
    The quoted document is also basically about a mirror world with unidirectional data flow from the real mobile robots to the VRE.
    Only in the last sentences of the second last chapter is a counter flow suggested and the mutual benefit offered by such a bidirectional information flow mentioned, which is also relevant, like the dates of publication, but also seems to have been added only to match the related property of our Evoos and the main topic of the 1st International Workshop on Managing Interactions in Smart Environments (MANSE '99) and to push out a publication as quickly as possible.

    Howsoever, even with a bidirectional data flow we only have a connection of real and virtual (information) spaces, environments, worlds, and universes respectively realities, but not an augmentation or even a fusion of them.
    This difference is also shown by the fact that we do both

  • localizing in an (information) space, environment, world, and universe respectively reality and
  • learning, constructing, and modifying an (information) space, environment, world, and universe respectively reality

    on the basis of our Evoos and Caliber/Calibre.
    Indeed, the document also considers in relation to "the use of VR for simulation and visualisation within robotics" a robotic subsystem, which "tours the inside of a building and automatically creates a 3-D map of the interior[, which again] produces two models; a geometrical model suitable for conventional CAD systems, and another composed of triangular meshes suited to graphical visualisation."
    But this robotic subsystem's functionality is not used and applied at run-time, for example by using the so-called External Authoring Interface (EAI) of the VRML specification to update the

  • virtual robots' positions in a VRML scene or
  • representational model held by the VE replicating the real environment

    by the agents' views and other sensor data, but only by the agents' movements and locations within said VE.
    It also does not solve the foundational problem of exact localization of the real mobile robot, which results from the inaccuracy of the physical sensors.
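
    For better understanding of the described mechanism, we sketch in the programming language Python the tap on the event queue, which listens for landmark agent_events, converts the coordinates to the absolute coordinate system of the virtual world, and informs the visualiser, as described in the quote. All names, the scale factor, and the origin offset are merely our own assumptions for illustration; the concrete interfaces of Agent Factory and of the VRML EAI are not reproduced here:

from collections import deque

SCALE = 0.01          # assumed conversion factor from odometry units to world units
ORIGIN = (5.0, 5.0)   # assumed origin offset of the virtual scene

def to_virtual_coords(x: float, y: float) -> tuple:
    """Convert odometry coordinates to the absolute virtual coordinate system."""
    return (ORIGIN[0] + x * SCALE, ORIGIN[1] + y * SCALE)

class Visualiser:
    def move(self, robot_id: str, pos: tuple) -> None:
        print(f"visualiser: move {robot_id} to {pos}")

class EventTap:
    """Listens on the event queue without any deliberative action by the agent."""
    def __init__(self, visualiser: Visualiser):
        self.queue = deque()
        self.visualiser = visualiser

    def drain(self) -> None:
        while self.queue:
            event = self.queue.popleft()
            if event["type"] == "landmark":     # only landmark agent_events matter
                pos = to_virtual_coords(event["x"], event["y"])
                self.visualiser.move(event["robot"], pos)

tap = EventTap(Visualiser())
tap.queue.append({"type": "landmark", "robot": "nomad-1", "x": 120.0, "y": 340.0})
tap.queue.append({"type": "sonar", "robot": "nomad-1", "x": 0.0, "y": 0.0})
tap.drain()   # only the landmark event moves the virtual robot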

    The title of this work already points to a certain merger of the real or physical (information) space, environment, world, and universe respectively reality and the virtual or metaphysical (information) space, environment, world, and universe respectively virtuality.
    But it is obvious that this work does not fuse both (information) spaces, environments, worlds, and universes respectively realities.

    The quoted document was also presented on the 1st International Workshop on Managing Interactions in Smart Environments (MANSE '99), like the document titled "Meeting the Computational Needs of Intelligent Environments: The Metaglue [Multi-Agent] System" quoted above.

    We have seen exactly the same odd anomalies with the projects Metaglue, CoMMA, and Cognitive Grid, the field of Cyber-Physical System (CPS), and with other things many times throughout the years, and somehow could show many unexplainable and conspicuous features, but also deficits in comparison with our works.

    We also can see now that all these works, including "Metaglue", CoMMA, "SIF-VW", and "Social Robot Architecture (SRA)" and "Reality and Virtual Reality In Mobile Robotics" (foundations of "Agent Chameleons" and "NEXUS") are following a trend, which we also mentioned in relation to

  • AL (self-organization, cognitive agent simulator, Holonic Manufacturing System (HMS), and VR at least of the years 1994 and 1995), and
  • Oz ("Believable Social and Emotional Agents", layered or hybrid agent architectures, and simulation and animation of agents in VR at least of the year 1996).

    In this relation it is very revealing that this work "Reality and Virtual Reality In Mobile Robotics" of the University College Dublin, the work "Metaglue" of the Massachusetts Institute of Technology (MIT), and the works "Agent Chameleons" and "NEXUS" of the University College Dublin and the MIT Media Lab respectively Media Lab Europe are from the same 2 research institutes and have the same unexplainable and conspicuous features and deficits. We do know why that is the case through other evidences gathered over more than 2 decades.
    We have here also another evidence of a connection between the MIT Media Lab and the DFKI, as observed and alleged by us, and for several years we have known which companies are the relays for information. It is quite obvious.

    And there is one attractor in that chaos, which brings order: our Evoos.

    With their works after 1999 we were able to catch all those fraudulent entities in the act (see the quotes and comments to the works "CoMMA-COGs", "Agent Chameleons", and "NEXUS").

    We quote a document, which is about Visual Programming (VP), Intelligent Agent-Based System (IABS), and Virtual Environment (VE), and was published in 2000: "Visual Programming Agents for Virtual Environments
    [...]
    As virtual reality systems become more commonplace, the demand for VR content and applications will rise. One of the more difficult tasks to design and implement in a virtual environment is dynamic behavior. Since the presence of humans in a virtual environment introduces asynchronous unpredictable behavior, inhabitants of a virtual world should react in an intelligent way to these events. A common solution to this is through the use of intelligent agents, since they can dynamically respond to changes in the environment. Their use in virtual environments has become increasingly common, not only for simple animal-like inhabitants of a world, but also as tutors and guides in learning and collaborative environments.
    [...] This paper details such a system. Called HAVEN (Hyperprogrammed Agents for Virtual ENvironments) it combines a generic agent architecture and a visual programming language, allowing visual specification of behavior.

    Previous Work
    [...] ALIVE system [Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments. 1995], Improv [Improv: A System for Scripting Interactive Actors in Virtual Worlds. 1996], and Oz [Virtual Reality, Art, and Entertainment. 1992] These systems generally combine reactive agent architectures with computer graphics to produce autonomous virtual creatures. [...] Herman-the-Bug [Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments. 1999] is a believable agent that acts as a tutor in an interactive learning environment.

    [...]
    [...] Toontalk [...]

    [...]
    [...] use an existing agent architecture. InterRap [...]
    The agent architecture was implemented with additional changes including: converting the basic algorithms to a multi-threaded design and incorporating a distributed scene graph (a database of geometry and transformations stored as nodes in a tree), to handle agent appearance, and adopting it for use in virtual reality environments. Additionally a better motor control system was developed based on the [ALIVE system]. [...]
    The agent's input system is composed of sensors giving the agent perception. These sensors are bound to nodes in the agent's representational scene graph. This is required because some sensors, such as synthetic vision sensors need to consider the orientation of the sensor, when providing information.

    [...]
    HAVEN allows for a generic agent to have its behavior programmed visually. These agents are design[ed] to run in an immersive virtual environment."

    Comment

    Oz with the Tok architecture and HAVEN with the InteRRaP architecture also show why and how we seamlessly transitioned to Artificial Neural Network (ANN), semantic network, Resource Description Framework (RDF), hypergraph, PROgramming with Graph REwriting Systems (PROGRES), scene graph, etc.
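
    For better understanding, we also sketch in the programming language Python what the quoted definition of a scene graph means, namely "a database of geometry and transformations stored as nodes in a tree". The translation-only transform and all names are merely our own simplifying assumptions for illustration:

class SceneNode:
    """A node of a scene graph: a local transform plus child nodes."""
    def __init__(self, name: str, dx: float = 0.0, dy: float = 0.0):
        self.name = name
        self.dx, self.dy = dx, dy       # local transform (translation only)
        self.children = []

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

    def world_positions(self, px: float = 0.0, py: float = 0.0):
        """Walk the tree and accumulate the transforms from the root to the leaves."""
        x, y = px + self.dx, py + self.dy
        yield self.name, (x, y)
        for child in self.children:
            yield from child.world_positions(x, y)

root = SceneNode("world")
agent = root.add(SceneNode("agent", dx=2.0, dy=1.0))
agent.add(SceneNode("sensor", dx=0.5))    # cf. HAVEN: sensors bound to scene graph nodes

for name, pos in root.world_positions():
    print(name, pos)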

    We quote a document, which is about the fields of Multi-Agent System (MAS) and Intelligent Virtual Environment (IVE or IntVE), and was published in 2001: "Multi-agent Systems as Intelligent Virtual Environments
    Abstract. Intelligent agent systems have been the subject of intensive research over the past few years; they comprise one of the most promising computing approaches ever, able to address issues that require abstract modelling and higher level reasoning. Virtual environments, on the other hand, offer the ideal means to produce simulations of the real world for purposes of entertainment, education, and others. The merging of these two fields seems to have a lot to offer to both research and applications, if progress is made on a co-ordinated manner and towards standardization. This paper is a presentation of [Virtual InTelligent Agents with Logic (]VITAL[)], an intelligent multi-agent system able to support general-purpose intelligent virtual environment applications.

    [...]
    The notion of an intelligent agent, indisputably challenging to define precisely, has been used to characterize a vast number of approaches and applications, ranging from simple softbots to complex, large-scale industrial control systems.
    Recent attempts to merge intelligent agent approaches with virtual reality and artificial life have given birth to the field of intelligent virtual environments (IVEs).
    An IVE is a virtual environment resembling the real world (or similar), inhabited by autonomous intelligent entities exhibiting a variety of behaviours. These entities may be simple static or dynamic objects (a revolving sun, traffic lights, etc.), virtual representations of life forms (virtual animals and humans), avatars of real-world users entering the system, and others.
    [...] Sophisticated simulated environments of different types (open urban spaces, building interiors, streets, etc) can significantly aid in architectural design, civil engineering, traffic and crowd control, and others. In addition, precisely modelled simulations of real-world equipment (vehicles, aircrafts, etc) not only can be tested at reduced cost and risk, but also more accurate results can be obtained thanks to the additional element of control by and interaction with intelligent, thus closer to real life, entities. Moreover, IVEs have set new standards in computer-aided entertainment, through outstanding examples of computer games involving large, life-like virtual worlds (where imaginative scenarios are to be challenged), interactive drama (where the user is an active participant in the plot) virtual story-telling, and many other areas where immersion and believability are key factors. Concluding, IVE-based educational systems incorporate believable tutoring characters and sophisticated data representation techniques, resulting in the stimulation of user interest and perceptual ability, thus providing a novel, effective and enjoyable learning experience.
    Despite the fact that an intelligent agent is the ideal metaphor for representing intelligent inhabitants inside an IVE, surprisingly little effort has been directed towards a formal and co-ordinated merging of intelligent agent systems and virtual reality techniques to produce IVEs fully exploiting the advantages of both fields. [...] control systems, distributed problem solving, resource allocation [...]

    [...]
    [...] Beliefs-Desires-Intentions (BDI) model [...]
    [...] Intelligent Resource-bounded Machine Architecture (IRMA), an architecture for resource-bounded (mainly in terms of computational power) deliberative agents, based on the BDI model. [...]
    [...] GRATE, an architecture clearly focused on co-operative problem solving through agent collaboration. Central to the entire architecture is the notion of joint-intentions. In fact, even though GRATE is a deliberative architecture based on the BDI model, it is specifically referred to as a belief-desire-joint-intention architecture.
    The BDI model has provided valuable theoretical grounds upon which the development of several other architectures and approaches, such as hybrid and layered agents, was based [9]:
    The Procedural Reasoning System (PRS) [7] is a hybrid system, where beliefs are expressed in first-order predicate logic and desires represent system behaviours instead of fixed goals.
    [...] INTERRAP, a layered agent architecture focusing on the requirements of situated and goal-directed behaviour, efficiency and co-ordination. [...]
    Reusable Task Structure-based Intelligent Network Agents (RETSINA) architecture. The architecture consists of three types of agents: interface, task agents and, information agents.
    Due to its apparent focus on high-level reasoning and generation of elaborate behavioural patterns, the BDI model seems to be inadequate to efficiently and effectively model all aspects of intelligent reasoning. [...]
    The merging of intelligent agent systems, artificial life and classical VR techniques has given birth to the field of Intelligent Virtual Environments (IVEs). Typical examples involving IVEs and general virtual agents include Humanoid [2 [ The HUMANOID Environment for Interactive Animation of Multiple Deformable Human Characters. 1995]], Creatures [8 [Creatures: Artificial Life Autonomous Software Agents for Home Entertainment. 1997]], Artificial Fishes [17 [Artificial fishes: Autonomous locomotion, perception, behavior, and learning in a simulated physical world. 1994]], and others.
    The CoMMA-COGs [5] project (Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents) is an architecture for Multi-Agent systems and animated virtual environments [...]. The system employs traditional multi-agent research approaches. Furthermore, it supports self-organization of agent societies [as Holonic Agent System (HAS)], so that external users perceive them as units, and, thus, being unaware of the underlying organization processes. In addition, resource-awareness allows agents to perform in unpredictable environments while flexibly managing their resources. In general, IVEs tend to focus on either the virtual representation and embodiment side, or the intelligence side. Full benefit has not yet been taken of the combined advantages of intelligent multi-agent systems and virtual environments. A complex, accurately modelled and general-purpose IVE, inhabited by numerous believable entities driven by strong and effective AI reasoning processes, is yet to be presented.
    A predecessor to the VITAL system and a first effort towards an intelligent agent system architecture with the ability to support IVE applications, the DIVA architecture [...] [18 [DIVA: Distributed Intelligent Virtual Agents. 1999]]

    [...]

    Multiple Agent Support and Inter-agent Communication
    [...]
    Agents are able to communicate using virtual, non-visual speech items. [...] KQML [6] performatives can be modelled using a suitable set of properties; such a set could include properties such as 'MSG-TYPE', 'VERB', etc, to denote intent, tone, and other information exchanged according to KQML format. [...]

    [...]
    [...] the system is distributed, thus able to exploit the benefits of today's sophisticated networking technologies and the Internet; it employs formal AI techniques - logic programming, planning, intentional reasoning - to support intelligent agent behaviours; it is modular and component-based, enabling the deployment of persistent applications; different types of agents - not necessarily built according to the structure proposed by the architecture, but using the same communication scheme - can be connected to a world server, providing openness and extendibility, as well as enabling dynamic alteration of a simulation's structure and experimentation with other reasoning approaches; the system incorporates sophisticated VR techniques to produce intuitive and believable visualisations; finally, the system comprises a set of state-of-the-art software applications [...].

    [...] Executable intensional languages and intelligent multimedia, hypermedia and virtual reality applications [...]"

    Comment
    We note three important points in relation to the state of the art at that time:
    First, only VR. What is called AR is not proper AR at all, because

  • PR is only real, physical, tangible objects,
  • AR is the augmentation of real objects with virtual objects, while
  • AV is the augmentation of virtual objects with real objects, and
  • VR is only virtual, metaphysical, intangible objects.

    Second, several Agent-Based Systems (ABSs) merely use the

  • Knowledge Query and Manipulation Language (KQML) and its successor Foundation for Intelligent Physical Agents (FIPA) Agent Communication Language (ACL) (FIPA-ACL), and
  • Defense Advanced Research Projects Agency (DARPA) Agent Markup Language (DAML)

    for communication and messaging between agents, but no ontology.
    Third, FIPA uses frame-based ontologies and an ontology service for the lifecycle management, the service interfaces, and the speech act communications of agents, and the domains respectively semantics of the message contents for interoperability.
    Evoos and OS have everything.
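
    For illustration of the second and the third point, we sketch in the programming language Python how a KQML performative can be modelled with such a set of properties, and where the missing ontology slot would belong. Apart from the properties 'MSG-TYPE' and 'VERB' mentioned in the quote, all property names and the helper function are merely our own assumptions for illustration and not the normative KQML or FIPA-ACL syntax:

from typing import Optional

def make_speech_item(verb: str, sender: str, receiver: str,
                     content: str, ontology: Optional[str] = None) -> dict:
    """Build a non-visual speech item carrying a KQML-like performative."""
    item = {
        "MSG-TYPE": "speech",
        "VERB": verb,           # the performative, e.g. tell or ask-one
        "SENDER": sender,
        "RECEIVER": receiver,
        "CONTENT": content,
    }
    if ontology is not None:
        # exactly the slot that is missing in the criticised systems:
        # without it the agents exchange messages, but share no ontology
        # for the semantics of the message contents
        item["ONTOLOGY"] = ontology
    return item

msg = make_speech_item("tell", "agent-a", "agent-b",
                       "(at robot-1 room-42)", ontology="building-topology")
print(msg)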

    We quote a first document, which is about the fields of Mobile Agent System (MAS or MobAS), IE, and IVE, and was published between the 7th and 9th of May 2003: "Agent Chameleons: Agent Minds and Bodies
    Abstract
    Agent design has to date concerned itself with the issues pertaining to a single body embedded in a single environment, whether virtual or real. This paper discusses the notion of an agent capable of migrating between information spaces (physical worlds, virtual reality, and digital information spaces). An architecture is presented that facilitates agent migration and mutation within such environments. This will in turn support agent evolution the ultimate in agent adaptivity.

    Introduction
    The Agent Chameleon Project strives to develop digital minds that can seamlessly migrate, mutate and evolve on their journey between and within physical and digital information spaces. This challenges the traditional boundaries between the physical and the virtual through the empowerment of mobile agents. Three key attributes mutation, migration and evolution underpin the Agent Chameleon concept.
    The ultimate survival and longevity of agents is predicated by their ability to sense, react and respond to environmental change. The response can take the form of migration across a wireless network, mutation of agent form, or evolution of the agents' form and associated capabilities. The form of an agent inextricably dictates or constrains its behaviour and capabilities within a particular environment. The optimum form is very much dependent upon its world [2 [Agents, Mobility and Virtuality: A Necessary Synergy. 11th to 13th of December 2000]]. [...]
    Within this paper an architecture and agent structure is described which supports seamless migration and mutation across platforms and within environments. Such agent adaptivity and mobility has thus far not been investigated in the literature. [Obviously, that is a wrong statement and even a lie to confuse the public in relation to our Evoos.]

    Related Research
    The Agent Chameleons project draws on a number of major bodies of research and seeks to extend current interpretations of agent systems, virtual environments, and embodied systems (robotics). This work builds upon seminal work conducted by the Collaborative Virtual Environment (CVE) community. Notable pioneering systems that incorporate agent-based techniques include DIVA (Distributed Intelligent Virtual Agent) [... (see also the quote about its successor VITAL above)], MAVE (Multi-agent Architecture for Virtual Environments) [...].
    The realisation of evolvable characters in virtual environments will draw inspiration from such work as Synthetic Characters [...] and work on agents as synthetic characters [...] [27 [Believable Social and Emotional Agents. 1996]].
    Although the principles of mobile agents have started to develop [...], few have embraced a true sense of mobility of an agent through information spaces. The term mobile agent has incorrectly referred to data flow between agent mechanisms, mobile components of a static agent, and notions of data inheritance of one agent from another. [...]

    Environment & Embodiment
    Terminology and its misuse continuously create confusion. This section reflects upon the terms situatedness, embodiment, and immersion and the interplay between these terms in order to set the foundations for subsequently presenting work on agent migration and mutation.

    [...] the virtual representation of an agent in virtual reality in the form of an avatar and controlled through such devices as data-gloves is often referred to as immersion of the user in VR. Similarly, when this agent migrates to a hardware platform, the primary context for actuation and sensing becomes the physical world, which is often referred to as physical embodiment.
    [...]
    Context is the all-encompassing term that is instantiated as situatedness, embodiment and immersion in different fields of research. Context constitutes a metalevel concept, which associates the actions and perceptions of a system with its environment. [...] The idea of context in artificial systems now has a new dimension. In this work, the specifics of the context for the Agent Chameleon equipped with the ability to migrate between different environments, changes. It can be immersed in VR, embodied in a robot, or situated on a PC or PDA accessing the Internet or databases. In order to do this, the traditional issues regarding mind and body in AI come to the fore.
    Agent Chameleons transcends the often-misused notion of embodiment in AI by emphasising the more appropriate/reflective issue of embodiment: complete adaptivity. [...] The Agent Chameleons project takes an alternative slant on immersion: a stronger sense of context and adaptability as realised in a seamless integration across virtual reality and the actual physical reality. That is, the agent is so immersed in the context that both physical and virtual worlds merge. [How can an agent be immersed in a context, if the context is instantiated as immersion? Immersed in immersion is nonsense and seems to be blah blah blah.]

    Mind & Body
    [...] the mobile agent can be viewed as an artificial mind with the capacity to change its form by possessing different bodies in different information spaces (i.e. robot in physical space, avatar in VR space). This technology provides for a very interesting turn in the arguments dealing with the development of an intelligent entity and the requirements for strong embodiment in physical reality [...]. One of the primary criteria for the realisation of an intelligent entity is the integration of the context into the design and implementation of the controlling architecture of the entity. Classical AI begged to differ. Descartes, in [Meditationes de Prima Philosophia==]Meditations [On First Philosophy, 1641,] aimed to show that mind is distinct from body in his study of the human body as a machine.
    The two perspectives mentioned previously, namely the dualist approach where mind is distinct from body and the embodied approach where mind and body aim to function as one, have aimed with moderate degrees of success to bridge the gap between designed and realised behaviour. Collectively, these approaches are key to the development of the Agent Chameleon. While this can be viewed as a dichotomy, the provision of context for the agent mind, which has the capacity to migrate between bodies, must be implemented in order to achieve the successful realisation of an Agent Chameleon.
    Agents that can migrate and mutate their embodied form present significant research opportunities, namely (a) the digital space can become more embedded in our own space and vice versa, (b) the agent can overcome the traditional shortcomings of being constrained to a particular information space, and (c) the classical interpretations of real-world attributes superimposed on an artefact such as physical geometry and constraints (gravity) become less pertinent in VR worlds.

    The Agent Chameleons Architecture
    The Agent Chameleons project extends the traditional notions of an agent environment and its constraints by expanding through mobility/migration and mutation to virtual environments (i.e. avatar), physical environments (i.e. robot), and software environments (i.e. OS desktops, PDA's) (see figure 2). This capacity to change the context of the agent's actions as it migrates necessitates a new approach to the traditional interpretations of how the environment affects the reasoning mechanisms of the agent.

    Figure 2. Agent Chameleon Architectural Strata
    Mobility Layer
    Whole Agents
    Agent Corpus
    Agent Mental States

    Agent Layer
    Agent Chameleon API

    Underlaying Java Machine
    Java API
    Java Virtual Machine

    Platform Layer
    Environment
    Mobile Device
    Physical
    Virtual
    Internet

    [...]

    Agent Architecture
    The architecture of the agents is based upon the Social Robot Architecture (SRA) [...]. Like the SRA, "a modular structure is used to divide the levels of complexity into incremental functionality ... More abstract levels provide increasing complexity and subsume lower level functionality. Reactive or reflex survival behaviours are implemented at the reactive level with more complex behaviours defined within the deliberative level".
    The agent architecture is comprised of three layers - Environmental, Reactive and Deliberative. [...]
    [...] Perceptors are responsible for the monitoring of the environment. They pass relevant information about it to the Reactive and Deliberative layers. On the other hand, Actuators are used to affect the environment and are triggered by information from the Reactive and Deliberative Layers.

    Figure 3. Agent Chameleon Architecture
    Deliberative Layer
    Belief Resolution System
    Commitment Management System
    Planning Mechanism

    Belief Set [Mental State]
    Global Beliefs
    Local Beliefs
    Social Beliefs

    Commitment Rules
    Global Commitment Rules
    Local Commitment Rules

    Plans

    Capability Set
    Evolution Subsystem

    Reactive Layer
    Reflexes
    Reflex Mechanism

    Environmental Layer
    Actuators
    Social
    Platform
    Perceptors
    Social
    Platform

    [...] A series of basic reflexes empower the agent with a collection of survival instincts. [...]
    [...] In order to achieve deliberative proactive agents we use the Belief-Desire-Intention (BDI) methodology. Agents are equipped with beliefs about their environment; such as what type of environment it is (e.g. robot, virtual environment, PDA, internet) and what the agent can achieve within this environment. In addition agents are equipped with beliefs about other environments, what constraints are in those other environments and whether they are capable of migration to those environments. A series of commitment rules help to drive the agents towards their goals. The mechanisms employed to maintain consistency across platform migration are based on a functionality set with active and inactive components depending on the instantiation. This facilitates the knowledge set of the agent in choosing possible body instantiations for particular problem sets.
    The deliberation mechanisms are based upon Agent Factory (AF) [8][24 [Far and A WAY: Context Sensitive Service Delivery through Mobile Lightweight PDA hosted Agents. 14th to 16th of May 2002]] [...].

    Capabilities.
    Agent Chameleons are considered as an autonomous, mobile and social entity in the classic multi-agent systems sense. The agent has at any given instance a persona, and associated with a given persona are a given set of capabilities. [...]

    Social Ability.
    [...]

    Migration
    [...]
    Agent migration is achieved through cloning. When an agent wants to migrate it informs the destination that it wishes to do so. The destination creates an agent. The mental state of the agent is only then copied and transmitted to the required destination. Upon receipt it is incorporated into the new agent. The old agent is then disposed of and the new agent begins execution. [...]

    Mutation
    Agent mutation is a core functionality of agent chameleons. The embodied form helps the user and other agents in the recognition and subsequent relationship with the entity. [...] It is our conjecture that the agent persona is inextricably linked to their associated capabilities. The mutation thus results in a change to the associated capability set.
    [...] Agent mutation refers to the agent's capacity to adapt its function and form depending on environmental, platform, and social constraints or freedoms. To illustrate, an agent migrating to a Khepera robot has limited processing capabilities whilst a VR agent avatar can have the power provided by a full operating system. [And once again: Bingo!!!]

    Environments
    The Virtual.
    In order to develop the coherence and fluidity necessary to effectively link the physical and virtual domains, a computational engine as found in computer gaming [or video gaming] is used to merge these traditionally distinct environments and facilitate the seamless migration of the digital spirit from one environment to another. This both enhances and facilitates the control of avatars based on either real world or digital-domain sensory information. [...]
    [...] This system builds upon work in the Virtual Robotic Workbench [12 [Reality and virtual reality in mobile robotics. 13th to 14th December 1999], "which via the Social Robot Architecture integrates the key elements of Virtual Reality and robotics [... and delivers the] Agent Factory"], but has been augmented with the Agent Chameleons framework. [...]

    The Physical.
    For migration into the physical world, agent chameleons can possess robotic devices, such as K-Team's Khepera robot. [...] The Khepera robots are embedded with Sun's kilobyte virtual machine (KVM), which is a VM built with constrained devices in mind. [...]

    The Mobile.
    A version of the system has also been created for use on Pocket PC based Personal Data Assistants (PDA's) such as the Compaq iPAQ. This version contains a simpler interface than that of the full VR based one, with agents appearing as 2D animations [...].

    [The Softbot.] Data.
    In order for the agent to explore the Internet, a web-browsing server is provided. [...] Migration to other information sources like a corporate intranet, or a specific database, could similarly be supported.

    Basic Migration
    This demonstrator illustrates the migration of an agent from a physical, real world, robot to a virtual space and vice-versa. In this experiment, a physical world is extended by a virtual world depicted on a computer screen adjoined to the physical world. Small Khepera robots can navigate and explore the desk-mounted world and dock in a robot garage at the edge of the physical world thus removing the physical robot from vision (see Figure 5). Thereafter the robot seamlessly crosses into the virtual world and a virtual robot continues the trajectory of the physical counterpart into the virtual space.

    External Mutation
    [...]
    [...] While the mutation contained within this example constitutes little more than morphing an avatar, mutation is generally much more complex and results in the change of the external or embodied form and the associated capabilities. The capabilities of an agent are inextricably related to the agent form.

    Survival
    The agents used in this research have been attributed basic survival instincts based on the ability of their environment to support their continued operation. For example, an agent would have a perceptor monitoring the power supply to its current environment.
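
    [For illustration only: a minimal sketch, in Python, of such a survival instinct; the threshold and names are hypothetical.]

        # Hedged sketch: a perceptor monitors the power supply of the current
        # environment and a reflex commits the agent to migrate when it drops.
        LOW_BATTERY = 0.15  # illustrative threshold

        def power_perceptor(environment: dict) -> float:
            return environment.get("battery", 1.0)

        def survival_reflex(agent: dict, environment: dict) -> None:
            if power_perceptor(environment) < LOW_BATTERY:
                agent["commitments"].append("migrate")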

    Conclusion
    Within this paper we have described the proof of concept demonstrators of the Evolutionary operating system (Evoos), which is described in The Proposal titled "Analysis and Design of an Operating System According to Evolutionary and Genetic Aspects", and introduced the concept of Agent Chameleons. Such deductive entities reside within embodied containers and exhibit the key attributes of autonomy, mobility, mutation and an ability to evolve. We regard mutation and evolution as higher order attributes synonymous with chameleon ontology-based, ontologic, and ontonic agents, a new and more sophisticated agent class. [Thank you for the flowers.]
    [...] On-going work is examining the derivation of models of trust reliance and dependence within such nomadic agent communities.
    We have provided a brief insight into three proof of concept demonstrators that illustrate the fact that mutation and migration are underpinned with the same base BDI paradigm or architecture. Similar to other actions, commitments to mutate and migrate are adopted and actuators subsequently realise these actions."

    Comment
    First of all, we recall that The Proposal is the second version dated 20th of April 2000 and that we will scan and publish the first version, which was published and discussed on the 10th of December 1999, as soon as we find it in our safe.

    We also note that a related document titled "Agent Chameleons: Migration and Mutation within and between Real and Virtual Spaces" was published between the 3rd and 5th of April 2002.

    That is typical of the Massachusetts Institute of Technology (MIT) and its sponsors. Definitely, we do not need plagiarists and fraudsters to interpret our original and unique works of art and therefore present the facts in our clarifications and investigations.
    But we can do it better again and again, even in relation to the terminology and terms used, the interplay between these terms, and common interpretations, in order to set the foundations for subsequently presenting work on truly creative and inventive, original and unique expressions of ideas.

    The statements about context instantiated as

  • situatedness,
  • embodiment, and
  • immersion

    are not correct, because we already have the Cyber-Physical System (CPS) with the deliberative and reactive, respectively hybrid, Immobile Robotic System (ImRS or Immobot) and the Holonic Agent System (HAS) based on InteRRaP, which the authors missed completely. For example, in relation to situatedness and embodiment, "the focus of attention of immobile robots is directed inward, toward maintaining their internal structure, in contrast to the focus of traditional robots, which is toward exploring and manipulating their external environment".
    In addition, context formalization and Arrow Logic (AL) are foundations of the Arrow System of the TUNES project.
    Evoos has all properties required by Agent Chameleon. We cannot see any new matter in relation to the idea, concept, and foundation, and also the expression of ideas, but only an editing of our work of art without referencing.
    Also note at this point that the notion of CPS was not known in 2003, even at the MIT, because it was introduced around the year 2005 (see the Clarification of the 18th of July 2021 once again).

    The statements about mind and body, and the dichotomy merely copy chapter 3.2 Funktionsweise eines Gehirns==Functioning or Operating Principle of a Brain of The Proposal, which collectively has both approaches as the basis of Evoos: "the provision of context for the agent mind, [...], must be implemented [...]".
    Later we even argued and clearly showed on the basis of the field of Algorithmic Information Theory (AIT) that there must be a

  • physical environment before something begins to tick in this (observable) universe, even if it is the undecidable interplay of chaos and order of some subatomic particles (e.g. Heisenberg) and their relations or connections (e.g. Einstein), and
  • certain minimal amount of substrate before information can be processed.

    Besides this, the mobile agent cannot be seen as an artificial mind, because the underlying agent architecture is only for an IAS but not for a CAS.

    In chapter 5 Zusammenfassung==Summary, C.S. proposed an assignment of the physiological senses and the muscles. There are at least 2 important details:
    Firstly, C.S. deliberately assigned the physiological senses and the muscles to both the physical and virtual functions, information, and information spaces:

  • das Fühlen - die Tastatur und die Maus
    the feeling - the keyboard [physical force, or information] and the mouse [virtual, or digital force, or information]
  • das Hören - das Mikrofon, die Netzwerkkarte und das Modem
    the listening - the microphone [physical force, sound, or information], the network card and the modem [virtual, or digital force, sound, or information]
  • das Sehen - die Videokamera und der Scanner
    the seeing - the video camera [physical image, or information] and the scanner [virtual, or digital image, or information]
  • die Muskeln - der Monitor und der Drucker
    the muscles - the monitor [virtual, or digital force, image, or information] and the printer [physical force, image, or information]
  • das Sprechen - der Lautsprecher, die Netzwerkkarte und das Modem
    the speaking - the speaker [physical force, sound, or information], the network card and the modem [virtual, or digital force, sound, or information]
  • die bereits vorhandene Gehirnmasse und -funktionalität - das BIOS und die CPU
    the already existing brain mass and functionality - the BIOS [virtual software or digital software or information processing] and the CPU [physical hardware or information processing]
  • der Puls - der CPU-Takt
    the pulse [for the physical information space and processing] - the CPU clock [for the virtual information space and processing or digital information space and processing]
    In this way, C.S. has not only merged but even fused the real or physical, and virtual or metaphysical (information) spaces, environments, worlds, and universes respectively realities into one (information) space, environment, world, and universe respectively our New Reality (NR) (spacetime fabric) creatively and conceptually.
    Secondly, C.S. has made this assignment a symmetric interface of the agent system, including a symmetric Multimodal User Interface (MUI).
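
    The symmetry of this assignment can be made explicit with a minimal sketch (in Python, with illustrative names only; this merely encodes the assignment quoted above):

        # Each sense or muscle is bound to a physical and a virtual/digital
        # device, so every modality is reachable from both sides of the one
        # information space.
        INTERFACE = {
            "feeling":  {"physical": "keyboard",     "virtual": "mouse"},
            "hearing":  {"physical": "microphone",   "virtual": "network card/modem"},
            "seeing":   {"physical": "video camera", "virtual": "scanner"},
            "muscles":  {"physical": "printer",      "virtual": "monitor"},
            "speaking": {"physical": "speaker",      "virtual": "network card/modem"},
            "brain":    {"physical": "CPU",          "virtual": "BIOS"},
            "pulse":    {"physical": "pulse",        "virtual": "CPU clock"},
        }

        def counterpart(modality: str, side: str) -> str:
            # Symmetric lookup: the physical side answers for the virtual
            # side and vice versa.
            return INTERFACE[modality]["virtual" if side == "physical" else "physical"]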

    Doubtlessly, the autonomic entity, respectively autonomic computing, and also mutation, migration, and evolution point directly to our Evoos, including the intelligent agent

  • deliberative, reactive, and hybrid, and
  • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot), and also
  • Artificial Life (AL),
  • Self-Organization (SO),
  • Holonic Agent System (HAS),
  • Multi-Agent System (MAS), and also
  • Emotional Intelligence (EI), or Emotive Computing (EmoC) and Affective Computing (AffC),
  • Believable Agent System (BAS), and
  • Intelligent VE (IVE or IntVE)

    Obviously, Evoos can have an IVE as User Interface (UI) (see the documents "Intelligent agents: Theory and Practice" and "COGs: Cognitive Architecture for Social Agents" quoted above).
    Obviously, a Cognitive Immobile Robotic System (CImRS) with VE and IVE, but that is already our Evoos.
    The reflective Distributed operating system (Dos) Apertos (Muse), which is very similar to the TUNES OS, is the operating system already used for the Mobile Robotic System (MRS), robotic dog, and companion, pal, or partner Sony Artificial Intelligence roBOt (AIBO), which was introduced in 1999 and is related to the agent architecture Tok of the Oz project with its IVE and believable agent-based system.

    In relation to mobile application and mobile migrating agent, see the quotes related to the very well known TELESCRIPT of the 2D VE Magic Cap in the related quote of "Intelligent Agents: Theory and Practice" above ("Agents are software processes, and are mobile: they are able to move from one place to another, in which case their program and state are encoded and transmitted across a network to another place, where execution recommences."), the document titled "Teleporting - Making [X Window System] Applications Mobile", published in 1994 and quoted above, and also the document titled "The FIPA-OS agent platform: [...] Open Standard", quoted in the upcoming Clarification of the 13th of April 2022 about FIPA-OS ("A common (but by no means necessary) attribute of an agent is an ability to migrate seamlessly from one platform to another whilst retaining state information, a mobile agent.").

    We also note that the related document titled "Reality and Virtual Reality In Mobile Robotics", referenced in the document titled "Agent Chameleon" and quoted above, is also based on the Social Robot Architecture (SRA), Agent Factory, and the animation of agents in a Virtual Reality Environment (VRE).

    With Mobile Agent-Based System (MABS or MobABS), Holonic Agent-Based System (HABS), and IVE, such as TELESCRIPT with CoMMA-COGs, and Agent Chameleon without NEXUS, we already have everything; merely AR and AV are missing.
    But the

  • field of Cyber-Physical System (CPS), which has already merged the physical and virtual worlds at least since the immobot in 1996, and
  • Virtual Object System (VOS), which already existed in 2002 and is described as "flexible, distributed object networks for a variety of purposes, but our primary application is multiuser collaborative virtual environments" and "primary application is multi-user 3D graphics environments", including AR, and immersive VE or VR,

    provide exactly these missing parts.

    And Agent Chameleon clearly lists Collaborative Virtual Environment (CVE) in general and Distributed Intelligent Virtual Agent (DIVA) in particular, which is the predecessor of Virtual InTelligent Agents with Logic (VITAL), which again also lists CoMMA-COGs in relation to IVE (see the related quote above).
    Evoos is a SoftWare Agent-Based System (SWABS or softbot), Holonic Agent-Based System (HABS), Mobile Agent-Based System (MABS), Autonomous System (AS) (e.g. Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot)), Robotic System (RS) (deliberative and reactive (e.g. subsumption architecture), Immobile Robotic System (ImRS or Immobot), and Mobile Robotic System (MRS)), and so on; together with the VE as symmetric Multimodal UI, it already includes Agent Chameleon.

    Evoos is based on the Virtual Machine (VM), the Dos Apertos (Muse) and AIBO, the Dos TUNES OS, the immobot, etc.
    Obviously, Evoos can be immersed in a VE, embodied in a robot, or situated on a desktop computer (e.g. Personal Computer (PC)) and a mobile computer (e.g. Personal Digital Assistant (PDA)).
    In fact, not the Agent Chameleon project but our Evoos lays the foundations for mobile lightweight agents which may inhabit various embodied states, besides all the other things, as discussed, explained, and shown in this clarification.

    Evoos is fractal or holonic, is immobot and CPS, and is connected to quantum computing, DNA, cell, brain, neural network ("Cell division and migration in a 'genotype' for neural networks [] (Cell division and migration in neural networks)"), molecular biology, nanotechnology and molecular physics, and Synthetic Reality (SR or SynR). It is also the foundation of PAL, which has SpaceTime in the Specialists component of the Reflective Agents with Distributed Adaptive Reasoning (RADAR) system architecture, which shows once again the fusion of the real and virtual worlds into one world by Evoos.
    In addition, Evoos was the first attempt at or step towards the cybernetic self-portrait, also called digital twin in relation to the Cyber-Physical System (CPS) and the Metaverse Roadmap about our Ontoverse (Ov).

    Do we have to say more in this dubious case as well?

    Evoos added the

  • ontology,
  • user as agent,
  • foundation of real teleportation,
  • foundation of SDN, NFV, VNF,
  • etc.,

    and

  • got rid of the multiple information spaces in favour of one New Reality and one Ontoverse (Ov),

    through the integration of the reflective Dos Apertos (Muse) and the TUNES OS, Evoos added the

  • Ultra Large Distributed System (ULDS) (massively distributed, loosely coupled) in addition to the immobot (massively distributed, tightly coupled),
  • massively multiuser,
  • Collaborative Virtual Environment (CVE),
  • Distributed Virtual Environment (DVE) or Networked Virtual Environment (NVE),
    • Massively Multiuser Virtual Environment (MMVE),
      • Massively Multiplayer Online Game (MMOG),

    through the integration of the Holonic Agent-Based System (HABS), Evoos added the

  • ROC,
  • foundation of mSOA,

    {?correct or bad explanation} through the integration of the CPS and the CVE VOS Mixed Reality (MR), Evoos added the

  • Semantic Reality (SR or SemR)
  • and so on,

    which are totally different things.

    We quote a second document, which is about the fields of Mobile Agent System (MAS or MobAS), IE, and IVE, and was published in 2003: "Agent Chameleons: Virtual Agents [Powered by] Real Intelligence
    [...]

    Introduction
    "You know I always thought unicorns were fabulous creatures too, although I never saw one alive before." "Well, now that we have met," said the unicorn, "If you'll believe in me, I'll believe in you."
    Lewis Carroll, "Through the Looking-Glass[, and What Alice Found There]"[, 27th of December 1871]
    The Agent Chameleon Project strives to develop the next generation of agents, autonomic entities that can seamlessly migrate, mutate and evolve on their journey between and within physical and digital information spaces. This challenges the traditional boundaries between the physical and the virtual through the empowerment of mobile agents. Three key attributes (mutation, migration and evolution) underpin the Agent Chameleon concept.
    [...]
    Participants within the Agent Chameleons experience will engage in human-computer collaborative activities that bridge multiple diverse digital information spaces. By imbuing artificial entities, engaged in this collaboration, with knowledge of their user and the user's environment, we strive to improve the quality of the experience offered to the user.

    Related Research
    [...]
    This research resonates with work undertaken within the mixed reality field with endeavours such as the Equator project [2 [Sensible, Sensable and Desirable: a Framework for Designing Physical Interfaces. February 2003], 4 [ Lessons from the Lighthouse: Collaboration in a Shared Mixed Reality System. January 2003]] and the Can You See Me Now project [10 [Where on-line meets on-the-streets: Experiences with mobile mixed reality games. 5th to 10th of April 2003]].

    A Context for Adaptive Agents
    [...]
    This research and the sister NEXUS project [18 [NEXUS - A Singularity Between the Real and the Virtual. 2003]] seeks to extend the functionality of such an agent by developing the reference of the agent being inherently linked to our reality. For example, gestures of an avatar in a VR space are fundamentally referenced in our physical reality. Similarly, the motion of the agent across numerous screens is based on realising a sense of mobility in physical space. The screen where we see the avatar represents a window through which the avatar can interact with us, not uniquely a window through which we can view the virtual space as is generally understood. The primary reference is the here and now, not something in some virtual space elsewhere.
    Agent Chameleons aims to deliver a framework that enhances Human Computer Interaction (HCI). Specifically, we envisage the Agent Chameleons as being a basis for the delivery of a new breed of pervasive and immersive applications.

    [...]"

    Comment
    First of all, we recall that the field of Augmented Reality (AR) is an extension of a semi-immersive system where important information is available both from the physical and virtual world simultaneously.

    The Equator project was started around August 2000 and was basically about multimedia systems, which integrate real location, virtual information, and 3D visualization, but not about proper Augmented Reality (AR), at least in the beginning.
    For example, one of the two major exemplary works is the Can You See Me Now? (CYSMN) project, which

  • is merely a location-based urban chase game, where (real) performers on the streets of a city use handheld computers, Global Positioning System (GPS) navigation devices, and walkie talkies to chase online players, who move their avatars through a virtual model of the same town, which again is shown on the handheld computers of the performers,
  • is built on the Equator Integrated Platform (EQUIP) architecture, and
  • was presented the first time on the 30th of November 2001.

    But the gameplay only combines real places with virtual information on a mobile computing device; a real environment is not augmented by a virtual overlay.

    See once again the comment on the document quoted before.

    We quote a document, which is about IBAS and Intelligent Mixed Reality Environment (IMRE), and was published in 2003: "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents
    Abstract This paper seeks to erode the traditional boundaries that exist between the physical and the virtual world. It explores mixed reality experiences and the deployment of situated embodied agents that offer mediation in the control of, and interaction between, avatars. The NEXUS system is introduced which facilitates the construction and experimentation with mixed reality multi-character scenarios. The behaviour of such characters or avatars is governed by a BDI agent architecture that can effectively sense both the real and the virtual world overlay. Within this paper we describe the NEXUS infrastructure together with the technology set that envelops it. [...]

    Introduction
    This paper describes NEXUS, a framework that supports the fusion of the physical and the virtual, creating a single world in which people can interact with virtual entities in their own space. We envisage NEXUS as a place in which virtual and physical information spaces become seamlessly entwined.
    [...]
    The degree of interaction between these two spaces is not simply a matter of people controlling the form of a virtual space, or even one of artificial entities that control the form of a physical space. Rather, NEXUS is viewed as a fusion of physical and virtual spaces in which a myriad of entities, virtual and physical interact and manipulate the shared space, dismantling the traditional barrier that has existed between the real and the virtual.
    This work strives to extend the context-dependent integration paradigm between virtual spaces and physical worlds as found in current augmented reality research by investigating how a seamless functional transition (rather than primarily perceptual) can be achieved. This is achieved through employing intentional agent technology to develop a seamless human computer interface between the real and the virtual. Agent Chameleons [8] [15 [Agent Chameleons: Migration and Mutation within and between Real and Virtual Spaces. 3rd to 5th of April 2002]] [16] lays the foundations for mobile lightweight agents which may inhabit various embodied states. NEXUS offers an architecture that facilitates the creation of autonomous characters and mixed reality experiences which are mediated by intentional agents.
    Two key issues are developed in this paper. Firstly, the extension of the common approach to deal primarily with our perception of mixed reality environments to encompass stronger functional features facilitated through such environments and secondly, the development of intentional multiagent technologies in this field of research.

    Related Work
    [...]
    Augmented or mixed reality research to date has primarily focused on the integration of the virtual space (either 2D or 3D objects) with the physical world through an overlay through such technologies as see-through head-mounted displays. [...] problem of facilitating the perception of spatial and temporal representations [...].
    Existing work in this field primarily aims to seamlessly merge our perception of the two realities into one coherent stimulus. Our interaction and our understandings of this interaction can however be developed further. The NEXUS project strives to extend this context-dependent integration paradigm between virtual spaces and physical worlds by investigating how a seamless functional transition (rather than primarily perceptual) can be achieved.
    [...]
    The NEXUS proposal resonates with work undertaken within the Equator project which explores mixed reality mobile computing 3D virtual environments and user experiences of such [...]. Mobile mixed reality computing and 3D visualization as exemplified by the "Can You See Me Now" project [11 [Where on-line meets on-the-streets: Experiences with mobile mixed reality games. 2003]] which has pushed the mixed reality mobile computing envelope yet further.
    Recent work has investigated the control of avatars and the mediation of mixed reality experiences through the use of agents. [No references are given.] The need for multi-agent techniques in the control of multi user virtual worlds has been recognised. [...] demonstrated this via the use of Deep Matrix, a VRML based multi-user environment system but, however, they did not deploy multiagents. More recent work has actively sought to use agents in this regard [...] who have again used VRML as the 3D technology but with an extended BDI architecture [...] achieved through Distributed Logic Programming [10]. [...] [21 [Using the BDI architecture to produce autonomous characters in virtual worlds. 2003]] [...] [These works are only Virtual Reality (VR), but not Mixed Reality (MR).]

    The NEXUS Experience
    The NEXUS experience aims to contribute two fundamental aspects of augmented reality research, that of developing the capability set of the avatar itself through the use of Belief-Desire-Intention agent control strategies, and to seek to develop the functional aspects of human-in-the-loop augmented realities. The objective is a functional seamlessness in how users can exploit augmented realities.
    [...]
    [...] The advantage of using the virtual overlay and it seamlessly merging with reality through context should allow the integration of those features that are otherwise difficult or impossible in reality (i.e. no gravity, morphing, cloning). This applies to both the perception of this mixed reality scenario and the functional attributes available to the participants within it. Limitations on achieving real-world-like degrees of resolution in virtual artefacts embedded in real environments through mixed reality becomes less of an issue with augmented functionality capabilities. [...]
    The virtual world is viewed as a mere extension (not replication) of the physical and conversely the physical is viewed as an extension of the virtual. [...] Envisage a scenario whereby an avatar within a Collaborative Virtual Environment picks up a torch and shines it toward the user in the physical. Imagine that the virtual light were to continue in the form of illumination within the physical world. Conversely the physical user looking into the dimly lit virtual world may retrieve a torch and shine it into the virtual. [...] The functional interface between the two realities should become seamless.
    In contrast, consider several computer screens juxtaposed. Imagine virtual characters that apparently were aware of entities and events beyond their world. One avatar may well gesticulate to another avatar present on the adjacent screen. Avatars may also point at, or make reference to, objects contained in the physical world. In so doing, they exhibit environmental awareness beyond the immediate parochial periphery of the world they currently inhabit.
    Such projects as Agent Chameleons [8] [15] [16] aim to develop autonomous digital agent assistants which can act like a ghost friend and move between embodied containers such as robots, virtual reality avatars and animated agents on desktops and PDA's. NEXUS extends the functionality of such an agent by developing the reference of the agent being inherently linked to our reality. For example, gestures of an avatar in a VR space are fundamentally referenced in our physical reality. Similarly, the motion of the agent across numerous screens is based on realising a sense of mobility in physical space. The screen where we see the avatar represents a window through which the avatar can interact with us, not uniquely a window through which we can view the virtual space as is generally understood.
    NEXUS participants are able to engage in human-computer collaborative activities that bridge multiple diverse digital information spaces. By imbuing artificial entities engaged in this collaboration with knowledge of their user and the user's environment, we strive to improve the quality of experience offered to the user.
    While the perceptual fusion of both real and virtual environments have been and continue to be investigated [9 [Perceptual Issues in Augmented Reality, SPIE Volume 2653: Stereoscopic Displays and Virtual Reality Systems III. January - February 1996]], it is not developed within this work.

    Fig 1: The NEXUS Architecture
    [Figure: World layer with two substrata: Physical (Display, Touch Screen, Camera) and Virtual (Java 3D, OpenGL, ARToolKit, JARToolkit [Java wrapper for ARToolKit]);
    Agent Layer: Viewer.agt, MyWorld.Actuator, and [BDI] Agents;
    Agent Factory Platform: Runtime Environment and Development Environment.]
    [...] The underlying architecture consumes pre-existing off-the-shelf software systems with the innovation manifesting itself through the integration of these complex disparate software components. [What nonsense.]
    [...]
    [...] Residing on top of these agent technology layers is the world layer, which incorporates and integrates technologies for mixed reality experiences and various interaction modalities. Two substrata exist, that of the physical world and the virtual. The former supports the use of multiple juxtaposed touch sensitive screens, together with digital cameras, microphones, tracking/sensing technologies and micro head-up displays for augmented VR viewing. The latter supports the display of 3 Dimensional VR spaces. [...] Collectively this enables the following mixed reality experiments.

    Two Mixed Reality Experiments
    The Nexus Space is a physical space that is imbued with microphones, touch sensitive screens, speakers, cameras, and location sensing devices. The space contains a number of screens that comprise the various windows through which human and artificial entities are able to interact. These screens are augmented with CCD cameras that can be used to provide a looking-glass type of effect where required. Additional support for interaction is provided through the use of micro head up displays for augmented VR viewing.

    [...]
    [...]

    These experiments are sufficient to illustrate how intentional agents may be incorporated and used as the control apparatus for autonomous characters. Rather than the avatars being merely empty vessels or containers they are viewed as membranes that embody a rich reasoning machinery that can perceive, reflect upon and subsequently effect the environment.

    Conclusions
    Augmented reality provides a compelling innovative means of integrating the real and the digital in order to facilitate our accessing the digital information space. To achieve a successful design, the system must incorporate those features which facilitate rather than confuse, focus rather than distract.
    The work presented here argues for the development of a more functional interface between the user(s) and the augmented reality through the use of intentional agent-based deliberative systems and mechanisms which provide for a more seamless integration of the real and the virtual.
    [...]

    Acknowledgements
    The work undertaken as part of the Agent Chameleons project [...]."

    Comment
    Obviously, Evoos and Evoos with VOS.
    As we said in the comment to the quoted document "Agent Chameleons: Agent Minds and Bodies" above, not the Agent Chameleon project but our Evoos lays the foundations for mobile lightweight agents which may inhabit various embodied states, besides all the other things, as discussed, explained, and shown in this clarification.

    As we said above, if one takes a closer look, then it becomes obvious that the Agent Chameleon architecture is based on Evoos, including Intelligent CVE, believable agent, and migrating, mutating, and evolving agent.
    Note that Agent Chameleons was only Virtual Reality (VR), when C.S. added Mixed Reality (MR) to Evoos in 2002 through the CVE VOS (Virtual Reality (VR) and Augmented Reality (AR) hence Mixed Reality (MR)). But in this case referencing the VOS is not so relevant at all.
    Also note in this context that the Social Interaction Framework for Virtual Worlds (SIF-VW) and the Equator project are based on the fields of 3D visualization and Virtual Reality (VR), but both are not related to proper Augmented Reality (AR).
    See also chapter 8.3 Wachstum des Betriebssystems==Growth of the Operating System of The Proposal for "Intentionalen Programmierung==intentional programming".

    Furthermore, Evoos includes the immobot and hence the Cyber-Physical System, and already fuses the real or physical and the virtual or metaphysical, or said in other words, NEXUS.
    But the exemplary utilization of Evoos respectively the NEXUS applications are quite nice.

    As we explained in the comment to the document quoted before, C.S. has not only merged but even fused the real or physical, and virtual or metaphysical (information) spaces, environments, worlds, and universes respectively realities into one (information) space, environment, world, and universe respectively our New Reality (NR) (spacetime fabric) creatively and conceptually.
    But C.S. has driven the approach of Cyber-Physical System (CPS) and Cognitive Agent System (CAS), including Autonomous System, Robotic System, CoMMA, SIF-VW, Agent Chameleon, and NEXUS, further by also adding the user, and not only her, his, or their avatar, making the user an agent as well, and turning the whole approach inside out by applying ontology, Digital Physics and Rechnender Raum==Computing Space, hypercomputing, Theory of Everything (ToE), and quantum computing.
    C.S. is included as a believable agent and the creator of the Ontoverse. We also recall that our OS is also a belief system, which bridges the gap (see the Caliber/Calibre). And no, C.S. is neither a chameleon nor a unicorn.

    If we apply ontology, Digital Physics and Rechnender Raum==Computing Space, and so on, then the observable universe becomes the embodiment and the real and virtual environments become one single environment, that has a unified context and no changes of the context, and does not differentiate between reality and virtuality. This is our Caliber/Calibre.

    Obviously, 1 year after their first publication about the Agent Chameleon they found out that we were far ahead and came up with the next plagiarism.
    There is no conflict of interests, because too many original and unique expressions of ideas were presented with Evoos and too many years lie between the publication of Evoos on the one side and the publications of Agent Chameleon and NEXUS on the other side.

    We quote a poster, which is about the fields of Intelligent Agent-Based System (IABS) and MAS of the DARPA, its subcontractors, and research institutes, and was published in 2003: "Personalized Assistant that Learns
    The mission of the PAL program is to radically improve the way that computers support humans, by enabling systems that are cognitive, i.e., computer systems that can reason, learn from experience, be told what to do, explain their actions, and respond robustly to surprise. PAL is developing prototype cognitive systems that can act as assistants for commanders and staff.

    [Image: PAL: a timeline relating Observe (past), Interact and Introspect (present), Act, and Anticipate (future).]

    Through the PAL program, DARPA intends to make major and long-term contributions to the field of cognitive systems, by developing

  • Long-term scientific and technical innovations in machine learning, reasoning, perception, and multi-modal interaction
  • Prototype systems with the best technologies to create integrated cognitive assistants
  • A progression of increasingly more capable and robust prototypes to be tested and used in real world situations

    Current software systems are painstakingly programmed for every contingency. This still leaves them unable to deal with changing and novel situations. PAL is developing software systems that learn on their own and adapt to changing situations without the need for constant reprogramming.

    PAL Goals, Approach, and Transition
    The program is focused on developing technologies that

  • Enable machines to learn and improve their basic functionality through experience (vs. through being explicitly programmed)
  • Can represent goals, system structure, and behavior, to support learning and user interaction.
  • Allow the software to be instructed and guided using natural human-oriented communications (e.g., natural language, diagrams, and gestures)
  • Have the ability to use visual and auditory cues to understand the user's situation (who is in the meeting, who is speaking, etc.)
  • Are integrated, resulting in fully functioning systems

    The strategy is to build two system versions: one by SRI International and a team of 25 subcontractors, called CALO: Cognitive Assistant that Learns and Organizes, and one by Carnegie Mellon University (CMU) researchers, with SRI as the integrator, called RADAR: Reflective Agents with Distributed Adaptive Reasoning. To speed development, these first systems are being built as assistants in the office domain, allowing the developers to be the initial users.
    Technologies developed under the PAL program will make military decision-making more efficient and more effective at all levels. For example, today's command centers require hundreds of staff members to support a relatively small number of key decision-makers. As PAL develops a new capability for "cognitive assistants," those assistants will reduce the need for large command staffs - enabling smaller, more mobile, less vulnerable command centers.
    Transitions are envisioned first in the experimental command and control environment, followed by the joint exercise environment, and finally to the operating environment in the 2010 timeframe.

    RADAR - Reflective Agents with Distributed Adaptive Reasoning
    The RADAR project is building and empirically evaluating a cognitive assistant that learns to help a human user in situations of intense information overload. The specific kind of information overload considered by RADAR is a flood of inbound email messages in a "crisis" situation.
    RADAR's user is constantly bombarded with situation reports, requests for briefings, and various other requests for information and information-related work products. Inbound email messages contain updates of constraints and other information that the users must identify, understand, and process to execute their job. This sort of information overload in a crisis situation is representative of a large class of problems that arise in both the military and commercial worlds.
    RADAR can be viewed as a cognitive prosthetic. It does not get in the way of a user in situations where the user has strengths, or where the user wants to take control. But RADAR does provide cognitive assistance in those situations in which the user has limited capacity to handle information, or in which the user has no interest in exercising control.
    As part of the overall PAL program, RADAR's overriding theme is learning-in-the-wild. As such, RADAR must be usable by a person with no special training, and it must learn during normal use.

    [Image: RADAR system architecture. Email from a corpus of crises flows into the NL/Categorizer, which passes messages to the RADAR Console (email client) and extracted tasks to the Task Manager; the Task Manager routes tasks back to the console and to the Specialists (SpaceTime, BriefingMaker, WBE, VIO, CMRadar-Rooms), which mediate between the simulated world and the user; shared resources comprise the Scone and ad hoc KBs, lexicon/ontology, training data, models, raw user logs, and world state.]

    [Image:] Email message with RADAR-learned tasks extracted

    RADAR offers a number of tightly integrated capabilities to help in this situation, each of which embodies its own specific form of learning-in-the-wild, all sharing common infrastructure and knowledge representation, and all available through a common Microsoft Outlook-based RADAR console.
    An important part of RADAR is a machine-learning-based email message classification engine that automatically classifies each email message according to the task requests it contains, allowing the user to move from an email-oriented-workflow to a task-oriented workflow, improving mental focus and retaining more information in the task-performance context. [And once again: Bingo!!!]
    Other RADAR tools support communication with the outside world: the Virtual Information Officer [(VIO)] is able to learn how to update web sites by interacting with the human user in English; the Workflow By Example [(WBE)] subsystem learns how to automate bulk edits of web sites by watching the human user go through a single manual example.
    RADAR is driven by an annual evaluation that tests how the system's learning-in-the-wild helps subjects respond more quickly and accurately to information overload under crisis conditions. Test subjects are asked to deal with unexpected events in the midst of a flood of email and other information. The system is tested before and after a learning period, providing a direct evaluation of its ability to learn in-the-wild.
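
    [For illustration only: a minimal sketch, in Python with scikit-learn, of a task-request email classifier of the kind described above; the training data and categories are invented, and this is not the RADAR engine.]

        # Hedged sketch: map inbound email messages to task categories,
        # enabling the quoted shift from an email-oriented to a
        # task-oriented workflow.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        messages = [
            "Please prepare a briefing on the situation by 0900.",
            "Can you book a room for the crisis meeting?",
            "Update the status page on the web site.",
        ]
        tasks = ["briefing", "scheduling", "web_update"]

        classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
        classifier.fit(messages, tasks)
        print(classifier.predict(["Need a briefing for the commander"]))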

    CALO: the Cognitive Assistant that Learns and Organizes
    Military commanders and staff work in a dynamic environment. CALO must be able to adapt to this environment, learning new concepts, new tactics, and new tasks. CALO learns through observation and inference, and through user advice and instruction - both implicit and explicit. This learning must occur "in the wild", meaning that the system learns through its exposure to the environment, without the intervention of human programmers.
    This is a radical research agenda. Learning-in-the-wild requires significant rethinking of most of the approaches in the machine learning field, often making them online and interactive, and always embedding them in the system's reasoning and user interface components so that they can impact system knowledge and functions. In turn, this requires rethinking the reasoning, user interface, and knowledge representation technology.
    Therefore, CALO must also take on a radical development agenda: it must deliver learning technology to users in a form that is valuable and compelling enough to make it a central part of their daily work. This is the only way that CALO can experience enough to learn in the wild.
    CALO is being developed as an office assistant so that the CALO team can use it every day to do their work, see how it learns, and continually improve its technology and utility. The goal is to apply this learning technology to key military problems. For example, CALO learns to schedule meetings, taking into account people's roles, availability, and scheduling preferences. The technology that enables CALO to learn which people in the organization should attend a meeting is relevant to the problem of forming ad hoc teams in dynamic military situations. Other examples of CALO learning include recognizing user work activities, creating new ones as they arise, and adapting existing ones as they change.
    CALO then learns to automatically associate people, email, and documents with relevant activities.

    [Image:] CALO shows the user what it learned

    CALO learns to prepare its user for meetings, finding and organizing the information that is relevant to a particular meeting. CALO learns to create records of meetings, focusing on action items and participant interaction.
    CALO's architecture emphasizes the integration of learning into all of the functional components.

    [Image: CALO's architecture integrates learning.
    Task Manager (Task Exec, Plan Reasoner, Collaborative Problem Solver, Coordination Mgr) with Task Learning;
    Knowledge Manager (Query Mgr, Update Mgr, Timeline Mgr, Memory Mgr) with Query Relaxation, Meta Learning, Knowledge-Enhanced Learning, and Collective Relational Learning over the Timeline Database (Episodic Memory) and Knowledge Base;
    Interaction Manager (IRIS, NL/Speech, Interpretation, Explanation) with Vocabulary Learning;
    Perception Manager (Participant Tracking, Meeting Activity Recognition) with Multimodal Fusion;
    Cyber Manager (Participant Local Cyber Environment, Remote Cyber Environment);
    RADAR Console.]

    CALO's learning-in-the-wild capability is evaluated annually using tests similar to the standardized tests given in schools. CALO systems are tested before and after they have gone through a period of learning through interaction with real users. Differences in test performance are attributable to learning-in-the-wild."

    Comment
    Obviously, Evoos. PAL and CALO are referenced in the section Intelligent/Cognitive Interface of the webpage Links to Software.

    The kickoff of the CALO project was in May 2003.

    Obviously, Microsoft wanted to steal our Evoos another time in this way.

    Our Natural Language Processing (NLP) and Natural Image Processing (NIP), Intelligent Personal Assistant (IPA), and Robotic Automation (RA), all based on SoftBionics (SB), are already standard.

    We quote an online encyclopedia about the subject lifelog: "A lifelog is a personal record of one's daily life in a varying amount of detail, for a variety of purposes. The record contains a comprehensive dataset of a human's activities. The data could be used to increase knowledge about how people live their lives."

    Comment
    The lifelog paradigm is also called quantified self. As we already explained in the years 2016 to 2018, we go beyond the quantified self or lifelog paradigm with the creation and introduction of

  • self-reflection, self-image, or self-portrait,
  • enhancement,
  • extension,
  • qualified self,
  • etc.

    based on cybernetics.

    We quote a webpage, which is about the LifeLog project of the DARPA and was published in the year 2003: "LifeLog
    Objective:
    LifeLog is one part of DARPA's research in cognitive computing. The research is fundamentally focused on developing revolutionary capabilities that would allow people to interact with computers in much more natural and easy ways than exist today.
    This new generation of cognitive computers will understand their users and help them manage their affairs more effectively. The research is designed to extend the model of a personal digital assistant (PDA) to one that might eventually become a personal digital partner.
    LifeLog is a program that steps towards that goal. The LifeLog Program addresses a targeted and very difficult problem: how individuals might capture and analyze their own experiences, preferences and goals. The LifeLog capability would provide an electronic diary to help the individual more accurately recall and use his or her past experiences to be more effective in current or future tasks.

    Program Description:
    The goal of the LifeLog is to turn the notebook computers or personal digital assistants used today into much more powerful tools for the warfighter.
    The LifeLog program is conducting research in the following three areas:
    1. Sensors to capture data and data storage hardware
    2. Information models to store the data in logical patterns
    3. Feature detectors and classification agents to interpret the data
    To build a cognitive computing system [cognitive system], a user must store, retrieve, and understand data about his or her past experiences. This entails collecting diverse data, understanding how to describe the data, learning which data and what relationships among them are important, and extracting useful information. The research will determine the types of data to collect and when to collect it. The goal of the data collection is to "see what I see," rather than to "see me". Users are in complete control of their own data collection efforts, decide when to turn the sensors on or off, and decide who will share the data.
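
    [For illustration only: a minimal sketch, in Python, of the three LifeLog research areas named above (captured sensor data, a logical information model, and a classification agent); all names are hypothetical.]

        # Hedged sketch: store captured sensor data in a logical pattern and
        # interpret it with a trivial classification agent.
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class LifeLogEntry:           # 2. information model (logical pattern)
            timestamp: datetime
            sensor: str               # 1. capturing sensor (e.g. "GPS", "camera")
            data: dict

        def classify(entry: LifeLogEntry) -> str:
            # 3. feature detector / classification agent (trivial stand-in)
            return "location" if entry.sensor == "GPS" else "other"

        log = [LifeLogEntry(datetime.now(), "GPS", {"lat": 53.3, "lon": -6.2})]
        print([classify(e) for e in log])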

    Program Impact:
    LifeLog technology will be useful in several different ways. First, the technology could result in far more effective computer assistants for warfighters and commanders because the computer assistant can access the user's past experiences. Second, it could result in much more efficient computerized training systems - the computer assistant would remember how each individual student learns and interacts with the training system, and tailor the training accordingly.

    References:
    Vannevar Bush's Memex (1945)
    [...]
    J.C.R. Licklider's Oliver (1968)
    [...]
    Donald Norman's Teddy (1992)
    [...]"

    Comment
    Obviously, Evoos.

    We quote the homepage of the Metaverse Roadmap foresight project, which was copyrighted 2009: "Creation of the Roadmap 1.0 began with our invitational Metaverse Roadmap (MVR) Summit May 5-6 2006 [...].
    There a diverse group of industry leaders, technologists, analysts, policy makers, academics and creatives outlined key 3D web visions, scenarios, forecasts, plans, opportunities, uncertainties, and challenges, for both a ten year planning horizon (2006-2016) and twenty year speculation horizon (2006-2025).
    From June 2006 to June 2007 we worked to capture, research, and summarize their inputs. We started with 200 pages of raw Summit transcript (plus supporting documents), distilled this into 75 pages of Metaverse Roadmap Inputs (in 19 foresight categories), and condensed it further still into a 23-page Metaverse Roadmap Overview. We hope you find this information valuable as you survey this fascinating and rapidly moving new global social and technology space.
    [...]"

    Comment
    We have often found the company SRI International in relation to our works, which that company should not know at all, for example in relation to the Intelligent Personal Agent (IPA) (e.g. PAL and CALO, referenced in the section Intelligent/Cognitive Agent of the webpage L2S of the website of OL, and also Siri, which is based on them, all based on Evoos) and the Cognitive Agent System (CAS), and highly suspicious projects of the DARPA. Obviously, it also knew what we are doing in the field of Mixed Reality (MR).
    But as one can see, despite the fact that we had everything ready around January 2006, realized and confirmed it in March 2006, but somehow waited 6 months with the publication, they did not really know what we had created, exactly as in the other highly suspicious cases of the Cognitive Grid and Cyber-Physical System (CPS).
    Now, our fans and readers should ask themselves why at least these 3 actions happened at virtually the same time, but not in all the years before. Exactly: they spied on us, followed us, and cheated on us all the years.
    Howsoever, only the Metaverse Roadmap summit was in May 2006, but we already published at the end of October 2006. Furthermore, it was merely about the 3D Web and, as can be seen with the following quote once again, they tried to catch up by plagiarizing our Evoos and OS in this case as well. In relation to the 3D Web, there was no integration with AR, MW, CPS, etc.
    Sad to say, they failed as well, because we had already discussed our Evoos since February or March 1999 and published and discussed The Proposal describing our Evoos on the 10th of December 1999, which is also the reason why we were so sure in relation to the delay of our publication of the OS at the end of October 2006: We noticed around 2005 that nobody was able to get around our Evoos, as we are explaining and proving in this clarification.

    We quote a document, which is the Metaverse Roadmap Overview and was published in June 2007: "What happens when video games meet Web 2.0? When virtual worlds meet geospatial maps of the planet? When simulations get real and life and business go virtual? When you use a virtual Earth to navigate the physical Earth, and your avatar becomes your online agent? What happens is the metaverse.

    Introduction
    Over the past year [a foundation] and its supporting foresight partners have explored the virtual and 3D future of the World Wide Web in a first-of-its-kind cross-industry public foresight project, the Metaverse Roadmap (MVR). We use the term Metaverse in a way that includes and builds upon Neal Stephenson's coinage in the cyberpunk science fiction novel, Snow Crash, which envisioned a future broadly reshaped by [immersive] virtual [reality] and 3D technologies.
    [...] Many helpful people from the IT, virtual worlds, professional, academic, futurist, and lay communities contributed ideas to the MVR. In its inaugural version, the MVR focuses on defining and exploring this major new social space. In future versions we expect to add industry-developed timelines for Metaverse technology development. [...]
    [...]

    Metaverse Definition
    The Metaverse is a complex concept. In recent years, the term has grown beyond Stephenson's 1992 vision of an immersive 3D virtual world, to include aspects of the physical world objects, actors, interfaces, and networks that construct and interact with virtual environments. [...] Here is one that seems as good a starting point as any: The Metaverse is the convergence of 1) virtually-enhanced physical reality and 2) physically persistent virtual space. It is a fusion of both, while allowing users to experience it as either.
    There is no single, unified entity called the Metaverse - rather, there are multiple mutually-reinforcing ways in which virtualization and 3D web tools and objects are being embedded everywhere in our environment and becoming persistent features of our lives. These technologies will emerge contingent upon potential benefits, investments, and customer interest, and will be subject to drawbacks and unintended consequences.
    [...] as new tools develop, we'll be able to intelligently mesh 2D and 3D to gain the unique advantages of each, in the appropriate context.
    [...]
    The emergence of a robust Metaverse will shape the development of many technological realms that presently appear non-Internet-related. In manufacturing, 3D environments offer ideal design spaces for rapid prototyping and customized and decentralized production. In logistics and transportation, spatially-aware tags and real-time world modeling will bring new efficiencies, insights, and markets. In artificial intelligence, virtual worlds offer low-risk, transparent platforms for the development and testing of autonomous machine behaviors, many of which may be also used in the physical world. [...]
    In sum, for the best view of the changes ahead, we suggest thinking of the Metaverse not as virtual space but as the junction or nexus of our physical and virtual worlds.

    [...]
    [...] Due to the special physics of the nanocosm (efficiencies of ICT, nanotechnologies, and process automation based on these technologies), it is most reasonable to expect the great majority of these technology trends to continue accelerating over the time horizon of this roadmap. Prediction analysis, another foresight practice, has repeatedly shown that even the best long-range technology forecasts typically have only a 50% success rate [...].

    [...]

    To construct our scenario set we selected two key continua that are likely to influence the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation to simulation; and the spectrum ranging from intimate (identity-focused) to external (world-focused).

  • Augmentation refers to technologies that add new capabilities to existing real systems; in the Metaverse context, this means technologies that layer new control systems and information onto our perception of the physical environment.
  • Simulation refers to technologies that model reality (or parallel realities), offering wholly new environments; in the Metaverse context, this means technologies that provide simulated worlds as the locus for interaction.
  • Intimate technologies are focused inwardly, on the identity and actions of the individual or object; in the Metaverse context, this means technologies where the user (or semi-intelligent object) has agency in the environment, either through the use of an avatar/digital profile or through direct appearance as an actor in the system.
  • External technologies are focused outwardly, towards the world at large; in the Metaverse context, this means technologies that provide information about and control of the world around the user.

    [...]
    Combining the two critical uncertainties gives four key components of the Metaverse future:
    Virtual Worlds
    Mirror Worlds
    Augmented Reality
    Lifelogging
    These four scenarios emphasize different functions, types, or sets of Metaverse technologies. All four are already well into early emergence, yet the conditions under which each will fully develop, in particular contexts, are far from clear.
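    [Sketch: the two continua and their four combinations quoted above can be captured in a few lines of Python; the type and variable names are our illustration and not part of the roadmap.]

    from enum import Enum

    class Focus(Enum):
        INTIMATE = "intimate"    # identity-focused
        EXTERNAL = "external"    # world-focused

    class Technology(Enum):
        AUGMENTATION = "augmentation"  # layers information onto the real world
        SIMULATION = "simulation"      # models wholly new or parallel worlds

    # The four key components of the Metaverse future, indexed by the two continua.
    SCENARIOS = {
        (Focus.INTIMATE, Technology.SIMULATION): "Virtual Worlds",
        (Focus.EXTERNAL, Technology.SIMULATION): "Mirror Worlds",
        (Focus.EXTERNAL, Technology.AUGMENTATION): "Augmented Reality",
        (Focus.INTIMATE, Technology.AUGMENTATION): "Lifelogging",
    }

    for (focus, technology), name in SCENARIOS.items():
        print(f"{name}: {focus.value}/{technology.value}")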
    There are of course other types and functions of technology likely to influence Metaverse development which are not explicitly covered in our scenarios.
    Several of these minimally mentioned or neglected topics are likely to be major near-term influences, such as Internet Television (ITV) and Videoconferencing. Others, such as the Conversational Interface (CI) to the web may become key drivers only in the longer-term speculation horizon of the roadmap (2016 to 2025).
    [...]
    Recognizing the complexity of the Metaverse space, we nevertheless consider the following four major scenarios an excellent starting point for understanding our virtual and 3D digital future.

    [...]

    Virtual Worlds (Intimate/Simulation)
    Virtual worlds increasingly augment the economic and social life of physical world communities. The sharpness of many virtual and physical world distinctions will be eroded going forward. In both spaces, issues of identity, trust and reputation, social roles, rules, and interaction remain at the forefront.

    Issues and Technologies
    Discussion of the Metaverse usually begins with massively multi-user virtual worlds (VWs), a fast-growing space that is already mixing physical and virtual social, economic, and to a limited extent, political systems via both asynchronous single-user and realtime multi-user modes. [...]
    A key component of the VW scenario is one's avatar (or in multiplayer games, character), the user's personification in the VW. As in the physical world, capabilities accessible in digital space are contingent on the limitations of the avatar. [...]
    [...] They are digital versions of narratives set in "other realities" since the beginning of civilization. In the earliest years, the quality of textual narratives, story, and emotional appeal drove adoption. [...]
    There is a useful distinction between VW-based multiplayer games [...] and VW-based social environments [...]. [...]
    [...] the user retains some ownership rights to the objects, land, and other assets acquired in the world. The emergence of broader individual rights inside VWs [...] was discussed [...] as a new convergence between virtual and physical space. While inspiring, the vision [...] of an emerging independent cyberspace, with its own political and economic rules and jurisdictions, like any sovereign nation, was not echoed by MVR participants, who talked of increasing physical world regulation over virtual space in the foreseeable future.
    [...] as VW "syndication" emerges in coming years. Having more user freedom to move avatars, interfaces, and assets between worlds - subject to the need to maintain story integrity in game-based worlds - was a common desire [...]. But to move beyond today's "Walled Gardens," not only new standards and syndicates, but better systems for user identity, trust, and reputation will be needed, to ensure player accountability to the unique rules of each world. [And once again: Bingo!!!]
    Both VWs and "mirror worlds" (virtual spaces that model physical space) offer object creation tools. But in multiplayer VWs, object creation is constrained by the setting and game rules. In mirror worlds, creation is constrained by the need to reflect reality. [...] computer-aided design and production of simple physical-world objects [...]

    What is life like in this scenario?
    [...]
    In the stronger version of this scenario, VWs capture most, if not all, current forms of digital interaction, from entertainment to work to education to shopping to dating, even email and operating systems, though the 3D aspects may remain minimally used in the latter contexts. Youth raised in such conditions might live increasingly Spartan lives in the physical world, and rich, exotic lives in virtual space - lives they perceive as more empowering, creative and "real" than their physical existence, in the ways that count most.
    [...] At the same time, the emerging Participatory Web [(Web 2.0)] is providing tools and platforms that empower the user to tag, blog, comment, modify, augment, select from, rank, and talk back to the contributions of other users and the world community. Tomorrow's 3D Participatory Web technologies will greatly enrich our virtual spaces. [...]
    [...] Will tomorrow's "Metaversans" require potential contacts (those seeking emails, profile info, or live contact) to teleport to the VW address of one of their beautiful virtual homes, with exteriors that display their public interests and values to the world? [...]
    [...] the ability of webcams to dynamically map the facial expressions of computer users onto their virtual world avatars was considered a probable near-term VW development. The ultimate expression of the VW scenario would include simulation of proprioception (body position), touch, scent and even taste, a form of immersive virtual reality. [...]
    A key enabler for the utility of avatars as representatives, screeners, assistants, etc. would be a Conversational Interface (CI) [...] a dialog platform sophisticated enough to support web queries and responses (text or voice) using seven or more word "sentences," approximating simple human conversation. [...] Empowering avatars with primitive conversational intelligence would allow us to use them as simple secretaries, agents, and customer support. Individuals could query your "digital twin" 24/7 to learn your public persona and current status, and a CI would promote universal access to and use of the 2D and 3D web, even for nonliterate youth in emerging nations. [And once again: Bingo!!!]
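    [Sketch: a deliberately naive Python illustration of the "digital twin" query idea described above, with a public profile plus a trivial keyword matcher standing in for a real Conversational Interface; all names, the profile data, and the matching strategy are our assumptions.]

    class DigitalTwin:
        def __init__(self, owner, public_profile):
            self.owner = owner
            self.public_profile = public_profile  # persona facts the owner exposes

        def answer(self, query):
            """Answer a short natural-language query from the public profile."""
            words = {w.strip("?.,!").lower() for w in query.split()}
            for topic, statement in self.public_profile.items():
                if topic in words:
                    return statement
            return f"{self.owner} has not published anything on that topic."

    twin = DigitalTwin("Alice", {
        "status": "Alice is currently travelling and will reply next week.",
        "interests": "Alice works on geospatial simulation and virtual worlds.",
    })
    print(twin.answer("Can you tell me what her current status is?"))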
    [...]

    Mirror Worlds (External/Simulation)
    Mirror worlds are informationally-enhanced virtual models or "reflections" of the physical world. Their construction involves sophisticated virtual mapping, modeling, and annotation tools, geospatial and other sensors, and location-aware and other lifelogging (history recording) technologies.

    Issues and Technologies
    Unlike virtual worlds, which involve alternate realities that may be similar to Earth's or wildly different, mirror worlds model the world around us. The best-known example of a mirror world (MW) is [...], a free, web-based, open-standards digital map of Earth. [...] is just one of a large class of mirror worlds, which are also known as geographic information systems (GIS). GIS systems capture, store, analyze and manage data and associated attributes that are spatially referenced to the Earth.
    Initially, MW maps were based on cartographic surveys, with informational overlays. Later maps were updated with satellite and aircraft imagery, and now some [...] are being augmented by ground-based imagery [...] to add ground-level images to the building models in our urban mirror worlds. [...] picture-based MWs [...] [This is not Augmented Reality (AR) and also not Virtual Reality photography or other nonsense, but is called multi-perspective imaging and Information System (IS).]
    [...]
    Digital Earth systems [...].
    [...]
    Firms with GIS, sensor or virtual world strategies [...].

    What is life like in this scenario?
    [...]
    In coming years, the proliferation of location- and context-aware sensors will create smart urban and rural environments, and the quality of our mirror world simulations, augmented reality interfaces and object and user lifelogs (history recording systems) will steadily improve. Future classes of RFID and other sensors will allow the emergence of "local positioning systems" (aka location-based systems) that enable us to locate everything we care about in our environment (e.g., tools in the house, children in the neighborhood, friends on the planet) on a realtime MW map.
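    [Sketch: a minimal Python model of such a "local positioning system": a registry of tagged objects with last-known map coordinates, queryable by identifier or by distance; the data model and names are our illustration, not a real product API.]

    import math
    import time

    class TagRegistry:
        def __init__(self):
            self._tags = {}  # tag_id -> (x, y, timestamp)

        def report(self, tag_id, x, y):
            """A sensor reports a tag sighting with map coordinates."""
            self._tags[tag_id] = (x, y, time.time())

        def locate(self, tag_id):
            """Return the last-known position of a tag, or None."""
            return self._tags.get(tag_id)

        def nearby(self, x, y, radius):
            """All tags within `radius` map units of a point."""
            return [tid for tid, (tx, ty, _) in self._tags.items()
                    if math.hypot(tx - x, ty - y) <= radius]

    registry = TagRegistry()
    registry.report("hammer", 2.0, 3.0)      # tool in the house
    registry.report("bicycle", 150.0, 40.0)  # object in the neighborhood
    print(registry.nearby(0.0, 0.0, 10.0))   # -> ['hammer']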
    [...]
    [...] teleport [...] [The cited document does not include the term teleport.]
    [...]
    In the longer-term time horizon, given a sufficiently robust model of the real world, complete with abundant live data sources and preferences and values maps of the inhabitants, mirror worlds will eventually come to offer a powerful method of testing plans through data mining and simulation. Business, environmental, and political strategists may use a mirror world system to check the plausibility of plans against a physical or virtual community's publicly expressed preferences and values. [...]
    Such a high-reflectivity model of Earth's visible and intangible aspects is outlined by David Gelernter in Mirror Worlds. Gelernter is optimistic that our coming data-rich geographic simulations can give us not only tree-level insight but also forest-level "topsight" into complex global systems, many of which are presently obscure.
    If the leading mirror world tech trend is towards increased data inputs (proliferating global sensors) and complexity and accuracy in our sims, the leading MW social trend may be efforts of the powerful to control access to the most useful new information. [...]

    Augmented Reality (External/Augmentation)
    In augmented reality, Metaverse technologies enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perception of the world.
    [...]

    Issues and Technologies
    [...]
    Augmented reality depends on the further development of intelligent materials and the "smart environment" - networked computational intelligence embedded in physical objects and spaces. [...] this vision of the so-called "Internet of things" moves well beyond today's primitive classes of RFID (radio frequency identification) tags. Concepts such as the "spimes" [...] (individually-identified objects that can be tracked through both time and space over their lifetime) or [...] "blogjects" (objects that keep a running public record of their condition and use) offer examples of the ways in which materials, goods and the physical environment play a part in the augmented reality world.
    Physical hyperlinks [...] are a recent major AR advance. PHs are machine-readable identifiers (1D and 2D barcode, RFID tag, image, sound, fingerprints) that can be resolved by a cell phone camera. A high-capacity (4,300 character) square 2D barcode called the QR ("Quick Response") code is now proliferating in Japan, with QR code readers preinstalled on all new 3G cellphones. [That is total nonsense, because a QR scanner is not an AR device in particular and the foundational concept of QR is not related to the foundational concept of AR in general, because virtual information is not overlaid on the physical world.]
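    [Sketch: whatever one thinks of the AR claim, the core of a physical hyperlink is a resolution step from a machine-readable identifier, e.g. a decoded QR payload, to a network resource; the resolver table and identifiers below are hypothetical.]

    RESOLVER = {
        "urn:ph:poster-4711": "https://example.org/products/4711",
        "urn:ph:bus-stop-12": "https://example.org/timetables/stop/12",
    }

    def resolve_physical_hyperlink(payload):
        """Map a decoded identifier to its target resource."""
        if payload.startswith(("http://", "https://")):
            return payload  # many QR payloads already carry a full URL
        try:
            return RESOLVER[payload]
        except KeyError:
            raise LookupError(f"unknown physical hyperlink: {payload!r}")

    print(resolve_physical_hyperlink("urn:ph:bus-stop-12"))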
    [...]
    Mobile wearable screens are to some degree virtual or mirror worlds, as they command all of the user's attention, at least for a glance. But they are also AR, as context-sensitive information is overlaid on them as they move through the physical world. [That is total nonsense, because displaying context-sensitive information on a screen does not overlay virtual information on the physical world.]
    Another promising AR approach is an audio interface, with voice- or context-driven information delivered via earpiece (e.g., the ability to ask your search engine anything, and have an answer whispered into your ear, contextualized to your physical location). Wearable audio AR may require a more robust Conversational Interface before it reaches mass adoption however.
    [...]
    [Image caption:] Steve Jobs demos the iPhone. [We do not know why that was shown, because as long as it is not viewed as a handheld version of our Ontoscope (Os), that mobile device is totally unrelated to our Ontoverse (Ov).]

    What is life like in this scenario?
    The augmented reality scenario offers a world in which every item within view has a potential information shadow, a history and presence accessible via standard interfaces. Most items that can change state (be turned on or off, change appearance, etc.) can be controlled via wireless networking, and many objects that today would be "dumb" matter will, in the augmented reality scenario, be interactive and to a degree, controllable. To the AR generation, such properties will be like electricity to children of the 20th century: essentially universal, expected, and conspicuous only in their absence.
    Whoever delivers the first useful and scalable AR operating system and standards, perhaps via the cell phone platform, may become a central player in this future. As virtual data proliferate, information overload will be a common problem. The best of these will regulate human use of the system, respecting natural work, rest, and recreation cycles. In the near-term, AR devices may employ today's collaborative filters, which self-organize to advance one's interests and values. [...] [And once again: Bingo!!!]
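    [Sketch: a minimal user-user collaborative filter of the kind alluded to above, which recommends items rated highly by the most similar other user; the ratings, item names, and similarity choice are our illustration.]

    import math

    ratings = {
        "ann": {"museum-layer": 5, "ad-layer": 1, "transit-layer": 4},
        "bob": {"museum-layer": 4, "ad-layer": 1, "history-layer": 5},
        "eve": {"museum-layer": 1, "ad-layer": 5},
    }

    def cosine(u, v):
        """Cosine similarity over the items two users have both rated."""
        shared = set(u) & set(v)
        if not shared:
            return 0.0
        dot = sum(u[i] * v[i] for i in shared)
        nu = math.sqrt(sum(u[i] ** 2 for i in shared))
        nv = math.sqrt(sum(v[i] ** 2 for i in shared))
        return dot / (nu * nv)

    def recommend(user):
        """Items the most similar other user rated highly, unseen by `user`."""
        others = [(cosine(ratings[user], r), name)
                  for name, r in ratings.items() if name != user]
        _, best = max(others)
        return [item for item, score in ratings[best].items()
                if score >= 4 and item not in ratings[user]]

    print(recommend("ann"))  # -> ['history-layer'] (bob is most similar to ann)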
    In the longer-term future, different people may have very different experiences of the same physical location. In extreme cases, one could use AR to hide images (such as signs, video displays, even other people) considered distracting or offensive. In a new form of self-obsession, isolation, and addiction, some might choose to see only "Potemkin Villages," an information façade catering to their pre-existing biases and desires, and obscuring unpleasant reality. [...]

    Lifelogging (Intimate/Augmentation)
    In lifelogging, augmentation technologies record and report the intimate states and life histories of objects and users, in support of object- and self-memory, observation, communication, and behavior modeling. Object Lifelogs ("spimes," "blogjects," etc.) maintain a narrative of use, environment and condition for physical objects. User Lifelogs, ("life-caching," "documented lives," etc.) allow people to make similar recordings of their own lives. Object lifelogs overlap with the AR scenario, and both rely on AR information networks and ubiquitous sensors.

    Issues and Technologies
    Lifelogging is the capture, storage and distribution of everyday experiences and information for objects and people. This practice can serve as a way of providing useful historical or current status information, sharing unusual moments with others, for art and self-expression, and increasingly, as a kind of "backup memory," guaranteeing that what a person sees and hears will remain available for later examination, as desired [...].
    Lifelogging emerges from accelerating technological trends in connectivity, bandwidth, storage capacity, sensor accuracy, miniaturization, and affordability.
    [...]
    The primary technological hurdle for the mature lifelogging scenario isn't the hardware, but the software: how does one tag, index, search, and summarize the terabytes of rich media archives of one's own life? [...]
    Beyond the near-term youth market, truly powerful user lifelogs seem unlikely to emerge until we have intelligent autocaptioning and autosummarizing systems, and a functional Conversational Interface (post 2016?), allowing voice-driven search on a wearable system through one's archive of past experiences (e.g., "show me that conversation last Summer when I was discussing abc with xyz.").
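    [Sketch: a toy inverted index over lifelog entries, illustrating the tagging and search problem described above; the entry structure and tags are invented.]

    from collections import defaultdict
    from datetime import datetime

    class Lifelog:
        def __init__(self):
            self.entries = []              # (timestamp, description)
            self.index = defaultdict(set)  # tag -> set of entry ids

        def record(self, when, description, tags):
            entry_id = len(self.entries)
            self.entries.append((when, description))
            for tag in tags:
                self.index[tag.lower()].add(entry_id)

        def search(self, *tags):
            """Entries carrying all of the given tags, oldest first."""
            ids = set.intersection(*(self.index[t.lower()] for t in tags))
            return sorted(self.entries[i] for i in ids)

    log = Lifelog()
    log.record(datetime(2021, 7, 14), "Lunch conversation about GIS standards",
               tags=["conversation", "xyz", "gis"])
    log.record(datetime(2021, 8, 2), "Phone call about holiday plans",
               tags=["conversation", "family"])

    # "show me that conversation last summer when I was discussing gis with xyz"
    print(log.search("conversation", "xyz", "gis"))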

    What is life like in this scenario?
    [...]
    For lifelogging adopters, retention of past experiences will become functionally perfect, but recall and analysis of those experiences will only be as good as the web-based indexing and search software, which will constantly improve itself over the lifespan of the user. [...]
    [...] As long as the reputation network focuses on products and services, the group ratings will differ little from today's collaborative product recommendation systems. [...]
    Systems advanced enough to recognize objects, symbols and individual faces, visual AI tasks that many experts expect to be accurate enough for general use in ten to twenty years, will offer powerful new abilities not just to society but also to individuals. [...]
    [...]
    A leading technological trend over this time period will be the increasing ability of lifelogging systems to make meaningful connections between disparate "memories," both individual and collective. In its fullest expression, such technology may become not simply a backup memory, but a backup sub-conscious, offering powerful cognitive augmentation and advice by past example. Viewed from the biggest picture, when coupled with ongoing work on the development of artificial general intelligence, lifelogging becomes one of several valuable pathways to a greater integration of human and machine "minds." [And once again: Bingo!!!]
    [...]

    How These Combine
    The Metaverse contains elements of all four scenarios. At the same time, their technologies broadly overlap, as in the use of a mirror world map inside a virtual world, or a heads-up display AR system or object or user lifelog inside a mirror or virtual world. There are also more general ways the scenarios overlap.
    One link between the virtual worlds and mirror worlds scenarios is the refinement of digital models of environments, and the sense of immersion that results from good models. At present, virtual worlds for games, education, or socializing have rudimentary physics models, and little if any emergent or evolved phenomena - they're scripted, static, or entirely dependent upon user creation. Conversely, today's best mirror worlds have little sense of place or immersion, limited real-time shared content (where the actions of one user change what other users see), and restrictions on what users can do within the environment. Improvements in either version of simulated worlds will come from lessons learned by examining the alternative.
    A link between the mirror worlds and augmented reality scenarios is the proliferation of sensors, networked devices, and intelligent materials. Both scenarios are heavily dependent upon the deployment of a multitude of systems able to monitor and influence properties of the physical world - the primary difference is the interface used to access this data. The two scenarios overlap yet have their unique strengths, with mirror worlds effective as tools of large-system monitoring and control, and augmented reality systems effective as mediators of personal interaction and point control.
    A link between the augmented reality and lifelogging scenarios is the development of a sophisticated interface for experiencing an enhanced awareness of one's physical and social environment, and sufficient network capacity to support full-time personal use. As described in the scenarios, the most effective AR and user lifelogging systems are likely to be unobtrusive wearable devices, which hand off most of their computation-intensive tasks to the network. Again, an augmented reality future will have some elements of lifelogging, and vice-versa, because the tools for one are enablers for the other.
    A link between the lifelogging and the virtual worlds scenarios is the emergence of a consistent digital identity allowing for seamless interaction between in-person and virtual representations of other people. This requires the development of an infrastructure that is open across multiple platforms, secure against spoofing, and able to recognize that you are you, regardless of how or where you're connecting. Advanced identity, trust and reputation may be slowest to emerge in virtual space, where part of the allure is to recreate oneself outside of one's social history. But the growing public transparency that will accompany advances in the other three scenarios is likely to impact virtual worlds as well, though perhaps to a lesser degree.

    Cross-Scenario Issues
    [...]

    Social Benefits and Challenges
    Relationships and Identity [And once again: Bingo!!!]
    [...]
    Filters, metadata, tags and search systems may be the most important infrastructure technology for the Metaverse. [And once again: Bingo!!!]
    Both augmented reality and mirror worlds offer context-aware versions of [a provider of a search engine service, a mirror world of the type digital earth, and other online services] or [an online encyclopedia] available simply at a glance, while lifelogging and virtual worlds, being more intrinsically personal, offer tools for a more detailed understanding of one's own life and relationships.
    [...]

    Business Benefits and Challenges
    Information Shadows
    Increasingly, businesses talk about the "information shadow" of the products and services they provide: the records of contacts, sources, deliveries, versions and so on that offer a complete history of a business offering. In coming years, the richness of information shadows in virtual space promises even the smallest Metaverse-using retailer the current logistics power of a [multinational retail corporation], the analysis power of a [multinational professional services company, specialized in Information Technology (IT) services and consulting], and the research power of a [multinational technology corporation, specialized in Information and Communications Technology (ICT)].
    [...]
    [...] the information shadows about people will make that task simpler, and herald a whole new level of consumer behavior modeling and predictive marketing. [...]

    [...]
    [Image caption:] Mixed-reality in [a Virtual Environment (VE)] [That is nonsense, because the image shows a classroom with a computer monitor, which shows a virtual classroom as a 3D model with the telepresence of the teacher and the real projection on the whiteboard arranged therein. That is neither Augmented Reality (AR) nor Virtual Reality (VR) and therefore not Mixed Reality (MR).]
    [...]

    Big Questions
    Privacy and Control
    In many respects, the biggest question about the emergence of the Metaverse concerns privacy.
    [...]

    [...]
    The dark horse scenario is mirror worlds. Although it seems the least flashy of the four, as it continues to develop it might remain the most important to existing organizations even in the longer term, as a tool for learning about and an interface for competitively managing the physical world. While the underlying technologies (supercomputing, simulations, virtual Earth software, sensors, etc.) are all currently available in rudimentary form, the particular combination is ambitious in scope, and the largest professional community, the GIS community, is currently behind the development of this scenario.
    No discussion of social integration and acceptance of the Metaverse would be complete without considering the mass collaborations now beginning to occur on our current "Web 2.0" version of the Participatory Web. [...]
    [...] even in these early days the Metaverse offers unique new ways to form social groups, to model our environment (both physical and abstract), to test out possibilities and explore our options [...].

    Technological Viability
    [...]
    The software aspects of the lifelogging world are a major challenge. Developing the tagging, indexing and search software necessary for a widely-usable user lifelogging system - including systems for recognizing faces and locations in images, correlating ambiguous connections for searches, and making it all accessible for non-technical users - is a sufficiently-hard problem that most MVR participants expected only rudimentary versions of these technologies during the next decade.
    Similarly, the mirror worlds and augmented reality scenarios depend upon a functional array of sensor technologies distributed widely and densely enough to provide both useful details and meaningful context. Power sources, networking protocols, and universal access vs. proprietary control remain unanswered questions.
    [...]

    The Metaverse Scenario
    Despite many open questions, it's clear that the technologies of the Metaverse are likely to change how we live, work and play over the near-term, possibly in transformative ways in the longer-term. Improving foresight in this space is both a wise business strategy and a broad social good.
    [...]
    [...] Vision Statements [...]"

    Comment
    We already documented the obvious copyright infringement, because everything that differs from the originals of the Metaverse and the Mirror World was copied from the webpage of our Ontologic System (OS).
    It is the result of an exploration published after June 2007, but unsurprisingly it is basically about the related part of our Ontologic System.
    In fact, before those fraudulent authors started with their {results}, we had already answered with our Ontologic System, including our Evoos, the question: What happens when video games meet Web 2.0, and also the fields of

  • Intelligent VR with "Believable Social and Emotional Agents",
  • Collaborative VE,
  • Massively Multiuser VE (MMVE),
  • Mixed Reality (MR),
  • Immobile Robotic System (ImRS or Immobot),
  • Computational Intelligence (CI),
  • Emotional Intelligence (EI), or Emotive Computing (EmoC) and Affective Computing (AffC),
  • Cybernetics,
  • Cyber-Physical System (CPS),
  • Cognitive Robotic System (CRS),
  • Semantic (World Wide) Web (SWWW),
  • Semantic Reality (SR or SemR),
  • and so on?

    Obviously, they could only steal what we had already published, but this was only the notion about the existence of our Caliber/Calibre and its relation to the

  • field of horology,
  • all these fields listed above, and also
  • fabric of reality,

    and therefore to space and time, and they were also unable to resolve our OS in detail, although it is not that difficult, but just a matter of competence, as we have shown with our explanations and clarifications over the years again and again.
    Later, we updated the webpage Overview and added the webpage Caliber/Calibre, which demonstrates again what is the original and unique work, and what is the plagiarism.

    We also note that the authors are only talking about sensors, but not actuators, in relation to the mirror world concept, which is why we called it unidirectional.

    Also note that

  • on the one hand the first iPhone was presented on the 9th of January 2007, which proves that the document was published after our presentation of the Evoos and OS at the end of October 2006, and
  • on the other hand the iPhone has become an Ontoscope since the Intelligent Personal Assistant (IPA) based on the Artificial Neural Network (ANN) of Evoos and the Cognitive Agent that Learns and Organizes (CALO) based on Evoos (see the quoted document "PAL" above) were added to the operating system of the mobile devices.
    PAL and CALO are referenced in the section Intelligent/Cognitive Interface of the webpage Links to Software of the website of OntoLinux.

    Also note once again that the description of our OS is a minimalistic description of our synthesis and has to be resolved to get the whole scope of it. For example, the

  • basic properties include "virtual environments" and "collaborative", and
  • Ontoscope component includes CoVE+DIVERSE - Collaborative Virtual Environments,

    which both also reference the Virtual Object System (VOS), which "is an infrastructure for object-oriented network communication, and building flexible, distributed object networks for a variety of purposes, but our primary application is multiuser collaborative virtual environments", including Virtual Reality Environments (VREs), but also Augmented Reality Environments (AREs), including mobile AR, and hence Mixed Reality Environments (MREs).
    But despite this confusion about the designations, one fact is undeniable: We published in 2006 and they copied and published our original and unique matter in 2007.

    In fact, the concepts of mobility, migration, and teleportation got a totally new meaning and realization with our Caliber/Calibre, Ontoscope, and so on.

    At first, we did not understand why the field of quantified self or LifeLogging (LL) is considered one of the four key components of the Metaverse future, and therefore we guessed that the quantified self of a non-human object respectively object lifelogging should suggest something that is related to the so-called digital twin and the fields of Cybernetics and Cyber-Physical System (CPS).
    We also guess that the authors saw that our Ontologic File System (OntoFS) is log-based, but we do know that they have not seen the

  • foundation of the Peer-to-Peer (P2P) Virtual Machine (VM) (P2P VM) Askemos with the smart contract transaction protocol already included in our Evoos and
  • blockchain technique included in our OntoFS and our Distributed Ledger Technology (DLT) based on it and our integrating Ontologic System Architecture (OSA) (see the hash-chain sketch after this list)

    all included in our OS.
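    The following minimal Python sketch shows only the general hash-chain technique behind such log-based, tamper-evident storage; it is an illustration under our own assumptions and not the actual OntoFS, DLT, or Askemos implementation:

    import hashlib
    import json

    # Each log record commits to its predecessor's digest, so any later
    # tampering with an earlier record is detectable.

    def digest(record):
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append(chain, payload):
        prev = digest(chain[-1]) if chain else "0" * 64
        chain.append({"seq": len(chain), "prev": prev, "payload": payload})

    def verify(chain):
        """Recompute every link; False as soon as one record was altered."""
        for i in range(1, len(chain)):
            if chain[i]["prev"] != digest(chain[i - 1]):
                return False
        return True

    log = []
    append(log, "create /home/alice")
    append(log, "write /home/alice/notes.txt")
    print(verify(log))           # True
    log[0]["payload"] = "forged"
    print(verify(log))           # False, the chain no longer links up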

    Indeed, after the comparison of the Social Interaction Framework (SIF) published in 1999 with the Social Interaction Framework for Virtual Worlds (SIF-VW) published in 2000 and our Evoos published in 1999 (see the comment to the document titled "SIF-VW: An Integrated System Architecture for Agents and Users in Virtual Worlds" (translated into English) and quoted above), we found out that the version SIF 1999 was extended with the

  • Human-In-The-Loop (HITL) and the User-In-The-Loop (UITL) approaches and a related feedback loop, and also
  • wrong and misleading designation of a 3D Virtual World (VW) or Virtual Environment (VE) as "this kind of augmented reality",

    which are related to the fields of Wearable Computing (WC or WearC) and Humanistic Computing (HumanC), which again are related to LifeLogging (LL) and Mediated Reality (MedR), including Augmented Reality (AR), and were already integrated before in the creation of our Evoos by what was shown to be our fusion of realities.

    What makes us wonder is that it does not mention the fields of IABS, CPS, CAS, Semantic (World Wide) Web (SWWW), and so on, not even the SWWW in relation to lifelogging, where it makes a lot of sense.

    The statements about the technological viability are just plain nonsense and also show the technological incompetence of the authors besides their social incompetence, because everything already exists and is integrated by our OS.

    Howsoever, we got the proof that other entities have understood our OS with its Ontoverse (Ov) to a significantly large extent, which for sure is the single, unified entity called Ontoverse (Ov).

    "The best way to predict the future is to invent it." Obviously, we have a 100% precison in predicting the future, which proves once again that C.S. has created and the creation respectively expression of idea of our OS is copyrighted.

    Intimate and inward-focused, and external and outward-focused are major concepts in relation to our Evoos and OS, as is the case with reflective and self-reconfiguring, and so on.
    Our Caliber/Calibre drives the inside-out approach to the maximum, which is discussed in philosophy as the simulation argument and living in a simulation, and is also about Digital Physics and Rechnender Raum==Computing Space, hypercomputing, Theory of Everything (ToE), and quantum computing.

    Finally, our claims should be regarded as being correct.
    We have already said two times that the case is closed and now we say it a third time, because even if an unnoticed or overlooked fact flipped the overall legal construct again, then we also noticed once again that if it flips, then it also flops the overall legal situation due to another fact and only shifts the white, yellow, or red line. And there are not many facts anymore that could flip the overall legal construct or situation.
    For example, {no Plan-Do-Check-Act (PDCA), QM loops, simulation, etc.} Immobile Robotic System (ImRS or Immobot) and {not bidirectional, no feedback, Plan-Do-Check-Act (PDCA), QM loops, etc.(!?)} Sensor Net (SN) flipped in relation to CPS, but proved MBAS or ImRS, SN, and CPS in Evoos.
    For example, CoMMA-COGs stole respectively flipped a little in relation to CAS, but proved respectively flopped a lot in relation to MAS, AR, IVE, CVE, MMVE, ROC, AC, mSOA, and Agent Chameleons in Evoos, and also once again MBAS or ImRS, and CPS in Evoos.
    For example, AIBO stole respectively flipped in relation to Mobile Robotic System (MRS) and Intelligent Robotic System (IRS), but proved in relation to MRS and IRS in Evoos.
    In addition, our Ontologic System (OS) with its metaverse Ontoverse (Ov) includes our

  • foundation of microService-Oriented Architecture (mSOA),
  • foundation of Service-Defined Network (SDN), Network Function Virtualization (NFV), and Virtualized Network Function (VNF), as well as Cloud Space-native Network Function (SNF),
  • foundation of all modern operating systems, including
    • iOS,
    • Linux and Android,
    • Windows,
    • etc.,
  • polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS) with one- or two-hop lookup performance in many cases, O(1) (constant time) hop lookup performance in most cases, and up to O(log^k n) (polylogarithmic time) hop lookup performance in cases of hotspot regions with churn-intensive workloads (see the lookup sketch after this list), including
    • Distributed operating system (Dos),
    • Peer-to-Peer Computing (P2PC),
    • SuperComputing (SC or SupC), including
      • High-Throughput Computing (HTC),
      • High Performance Computing (HPC or HPerC),
      • High Performance Communications (HPC or HPCom),
      • High Productivity Computing (HPC or HProC),
      • Cluster Computing (CC or ClusterC),
      • Distributed SuperComputing (DSC or DSupC), including
        • Grid Computing (GC), and
        • Wide Area Network (WAN) SuperComputing (WANSC) or Interconnected SuperComputing (ISC),
      • Many-Task Computing (MTC),
    • Distributed Artificial Intelligence (DAI), Multi-Agent and Cooperative Computing (MACC), and Modeling Autonomous Agents in a Multi-Agent World (MAAMAW),
    • Scalable Distributed Tuplespace (SDT),
    • Scalable Content-Addressable Network (SCAN),
    • Ultra-Large scale, Massively Distributed System (ULMDS) or Ultra Large Distributed System (ULDS), including
      • Ultra-Large scale, Massively Multiuser Virtual Environment (ULMMVE), including
        • Ultra-Large scale, Massively Multiplayer Online Game (ULMMOG),
      • Resilient Distributed System (RDS) with
        • Byzantine resilience protocols, and
        • INtrusion-Tolerant Replication (INTR),
    • Space-Based technologies (SBx),
    • Service-Oriented technologies (SOx),
  • Distributed Ledger Technology (DLT), including
    • foundation of Bitcoin, Ethereum, and Co.,
  • New Reality (NR), including
    • eXtended Mixed Reality (XMR) or eXtended Reality (XR),
  • Web 3.0, Web 4.0, Web 5.0, including
    • Web3,
  • Ubiquitous Computing 2.0 and Internet of Things 2.0, including
    • Industry 5.0 (Industry 4.0 and Ontoverse (Ov)), including
      • Industrial Internet of Things (IIoT) and
      • Industry 4.0,
  • etc., etc., etc., including
    • etc., etc., etc., including
      • etc., etc., etc.,

    and their overall integration by our Ontologic System Architecture (OSA) and Ontologic System Components (OSC).
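    To illustrate the lookup performances mentioned in the list above: with a full routing table every key is one hop away (O(1), at the price of per-node state and synchronization), while a Chord-style finger table jumps by powers of two so that each hop at least halves the remaining distance on the identifier ring, giving O(log n) hops. The following minimal Python sketch simulates only the greedy power-of-two routing; all parameters are illustrative and unrelated to the actual OS implementation:

    import math

    M = 16                                    # identifier bits, ring of 2**M ids
    nodes = list(range(0, 2**M, 2**M // 64))  # 64 evenly spaced node ids

    def successor(key):
        """First node id at or after `key` on the ring (with wrap-around)."""
        return next((n for n in nodes if n >= key), nodes[0])

    def chord_hops(start, key):
        """Greedy power-of-two routing from `start` towards `key`."""
        node, hops = start, 0
        while node != successor(key):
            dist = (key - node) % (2**M)      # remaining clockwise distance
            jump = 2 ** int(math.log2(dist))  # largest jump that cannot overshoot
            node = successor((node + jump) % (2**M))
            hops += 1
        return hops

    print(chord_hops(nodes[0], nodes[63]))    # 6 hops, i.e. log2(64) for n = 64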

    Our goal was not to show what was going on behind the curtain at that time in the first place, but to show the foundations, the differences, and when and where we finally left the pack behind us.

    See also the

  • Clarification of the 18th of July 2021, which is about the fields of UbiC and IoT, CPS, and so on, and
  • Clarification of the 25th of December 2021, which is about the various Web x.0, and which again puts all these matters together.

    Last but not least, we note that our creations of the Evoos and the OS

  • began the discussions,
  • are used as sources of inspiration and blueprints,
  • are followed by the mainstream, and
  • are something totally new, which was absolutely unforeseeable and unexpected by an expert in the related fields respectively a Person of Ordinary Skill In The Art (POSITA) in the middle of December 1999 and at the end of October 2006,

    which proves their status as original and unique, copyrighted works of art.

    This provides us with a relatively good legal ground to begin the last negotiations about details and the invitations in relation to our Society for Ontological Performance and Reproduction (SOPR) and our other Societies.

    Needless to say, we have priced in the costs and damages of all those frauds and serious criminal actions in the License Model (LM) of our SOPR and organized the Articles of Association (AoA) and the Terms of Services (ToS) of our SOPR accordingly, to keep certain parts exclusive: the more original and unique they are and the clearer their status as copyrighted works is, the more exclusive they are kept.

    OntoLab, The Lab of Visions, or better said The Lab of Proven Visions and Fulfilled Promises.
    All or nothing at all.
    Welcome to the Ontoverse (Ov).


    20.February.2022

    08:18 UTC+1
    Ontonics Further steps

    What a development. We concluded that a general problem can be solved much more easily than we did in the past, so much more easily that we can only say: the trees, the forest, the blinds.

    We developed several basic variants of the additional solution.


    22.February.2022

    Ontonics Further steps

    Because of a certain geopolitical development, which already damages the goals and even threatens the integrity of C.S. and our corporation, including our Societies, we have rearranged and revised some business plans.

    All enterprise endeavours in specific industrial sectors that were planned to a significant extent to

  • realize the soft landings of certain territories in the course of the developing transition to our New World Order (NWO), have been canceled without replacement or substitution, and
  • realize subsidiaries in certain territories or together with entities of said territories in the course of the conduction of common business, have been dispersed to our other developing subsidiaries being established in other territories on the continents Asia and America.

    While rearranging our business plans, we also suddenly concluded that we can rearrange our business plan in relation to all enterprise endeavours in another industrial sector as well, which we will not disclose in order to increase the moment of surprise. But we would like to share the information that

  • in particular this rearrangement will even exploit deficits of certain territories, which they cannot circumvent and which we are no longer willing to manage in a different way for their soft landings, and
  • in general these enterprises have progressed far more rapidly in the last days than initially planned.


    28.February.2022

    Original vs. Inspiration

    Bladeless wind energy generator

    Dario Nunez Ameni Windstalk
    Vortex Bladeless Wind Forest
    Los nuevos generadores eolicos vortex==The new Vortex wind generators

    © Dario Nunez Ameni, Vortex Bladeless, and Alex R. Fisher and Ramon Curto
