News 2022 May

02.May.2022

00:19 UTC+2
BaFin has to warn against Bitcoin, Ether, and Co.

The illegal cryptocurrencies, such as Bitcoin, Ether, and Co., are nothing other than a virtual and disguised form that serves, among other things,

  • the operation of an illegal Ponzi scheme or pyramid scheme,
  • the production and distribution of counterfeit money,
  • investment fraud,
  • tax evasion, and
  • the financing of terrorism.

    Indeed, it is an illegal Ponzi scheme or pyramid scheme, because, for example,

  • only the top 2% of account addresses own 95% of the Bitcoin volume of more than 800 billion dollars in December 2021, namely the first issuers and users of this illegal cryptocurrency, and
  • the public is also deceived about all other facts, among other things about
    • an alleged public benefit, which in reality rests only on the possibility of low-cost electronic transfers, and
    • an alleged suitability as a safe store of value, although this is refuted again and again by the very volatile exchange rate to real currencies,

    and this deliberate deception is carried out even by long-established banks.

    The suspicion is therefore more than well founded and given, which is why the Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin)==Federal Financial Supervisory Authority of the B.R.Deutschland is also under its legal obligation to warn the public most urgently against the use of illegal cryptocurrencies, such as Bitcoin, Ether, and Co., in particular as a capital investment (product), instead of additionally promoting the fraud with the granting of so-called crypto custody licenses.
    The latter is useless and worthless anyway for several known reasons without a permission and license from our Gesellschaft für Ontologische Aufführung und Reproduktion (GOAR)==Society for Ontological Performance and Reproduction (SOPR) in connection with illegal digital and virtual currencies and other property.

    00:54, 05:24, and 07:27 UTC+2
    SOPR #33m

    *** Work in progress - LM, KG, and IPA not ready ***
    Topics:
    In this issue, we summarize messages, notes, comments, and other publications related to the following topics:

  • Legal matter [Threat to integrities]
  • Legal matter [Ownership regulation]
  • Legal matter [Scope of license]
  • Legal matter [Open source license]
  • Legal matter [Raw signals and data]
  • Legal matter [Sanction]
  • Legal matter [Ontologic Financial System (OFinS)]
  • Legal matter [Exploitation of reports]
  • Legal matter [Active Components]
  • Infrastructure [Satellite] or Superstructure
  • Consent Management System (CMS or ConsMS)
  • Ontologic Financial System (OFinS)
  • Ontologic Financial System (OFinS) [Digital money supply]
  • Ontologic Bank Financial Information and Communications (OBFIC or OntoBankFinIC)
  • Social and Societal System (SoSoS or S³) [Board of directors]
  • Further steps [Inviting letter]
  • Further steps [Use of royalties]

    Legal matter [Threat to integrities]
In general, the denial of the existence constitutes a very serious attack on and threat to the integrity of an entity, specifically an individual.

    In particular, the denial of the existence of any original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. constitutes a very serious attack on and threat to the integrities of C.S. and our corporation, as done in the last months once again by some companies.
    Particularly aggravating the already not very comfortable and pleasant legal situation for them is the fact that they have done it collaboratively as members of a national clique and together with other entities, such as the press and members of other national cliques.

    Our Society for Ontological Performance and Reproduction (SOPR) gives all entities concerned worldwide a deadline of 2 months to correct all of their statements regarding all of the AWs and further IPs included in the oeuvre of C.S. in a truly acceptable way in public before withdrawing the non-labelling option or blacklisting them or both. :)

    Legal matter [Ownership regulation]
    The ownership regulation of our Society for Ontological Performance and Reproduction (SOPR) means more than a division of the shares of a business.

    In case that our SOPR has to impose an ownership regulation in accordance with the Articles of Association (AoA) and the Terms of Services (ToS) of our SOPR, the

  • out-of-court agreement and
  • non-labelling option

    are abolished as a simple consequence as well, and our SOPR is free to decide in relation to both.
    In addition,

  • blacklisting respectively denial of market access

    is another option of our SOPR to protect and enforce all of the rights and properties (e.g. copyright, raw signals and data, online advertisement estate, etc.) of C.S. and our corporation (see also the section Legal matter [Scope of license] of today).

    Legal matter [Scope of license]
    Prosecution, or limited scope of license,
    To be, or not to be in the market, that is the question:
    The latter works in all cases.

    In (too) many cases the demand for federal, national, and international

  • sovereignty, and
  • safety and security, and also
  • standardization, and
  • other social, societal, political, legal, technological, environmental, and economical requirements

    of governments and their cliques as well as their international partners has turned out to be only a pretext to

  • interfere more in the rights and get more control over the properties, and
  • exploit the goodwills, undermine the intentions, damage the goals, and even threaten the integrities

    of C.S. and our corporation.
    It has also become apparent that countries are even sabotaging the activities of our SOPR and eventually the protection, support, and demand for

  • freedom of choice, innovation, and competition, and also
  • interoperability and standardization, as well as
  • harmony, continuity, stability, and prosperity

    pro bono publico==for the public good.
    In the following we give some prominent examples:

  • In the P.R.China we still observe official meddling in private business, credit-fueled economy, and subsidization of infrastructure, which have risen to unsustainable and unacceptable levels (once again).
  • In the European Union (EU) one action and legal rule after the other is drafted and introduced, which could have been done already 20 years ago, but this is a process that has only begun with our activities, specifically the establishment of our SOPR and its actions. That the largest companies of the Information and Communication Technology (ICT) industrial sectors are also affected is not the main goal, but only a side effect, because over time one can see that the focus of the European Commission (EC) is placed on our OS. But such a focus on an individual entity is not correct.
  • In India a certain cliquism is more than obvious.
  • In the U.S.America a certain understanding about the world and the global economy, as well as non-action even in cases where laws demand proper action, are unmistakable.
  • In the other member states of the so-called Group of Twenty (G20) and most of the other places more or less the same is happening.

    The subsidization of infrastructure, specifically the so-called Big Green Deal and the New Infrastructure, both based on our OS and on the exclusive infrastructures of our SOPR and our other Societies, all mainly created by C.S. and our corporation, has risen to unsustainable and unacceptable levels (once again) in the course of another orgy of debts.
    We also observe a competition of social and political systems, although we said that there is no race for control over the original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. and that it will not happen at our expense (once again).

    In addition, we have already complained multiple times that our demand to stop the mimicking of C.S. and our corporation has been ignored, although it is a requirement of the so-called societal compromise, which constitutes the legal foundation for opening our work of art titled Ontologic System, and for allowing and licensing the performance and reproduction of certain parts of our Ontologic System (OS).
    In this context, we also made crystal clear that we will not open our OS without any reason and cause only to enable other entities to compete against us (only) on the basis of our AWs and IPs, specifically those entities who have nothing else in mind than to exploit the goodwills, undermine the intentions, damage the goals, and even threaten the integrities of C.S. and our corporation, alone or in collaboration with other entities, to name some of their true illegal and even serious criminal motivations and activities.

    We tried different approaches and measures to protect and enforce all of the rights and properties of C.S. and our corporation, but eventually we had to conclude that the

  • exertion of pressure from the inside does not work (satisfactorily) or even is not possible, and
  • ownership regulation does not serve our goals and interests, because it provides no encouragement to show the required respect and to change behavior.

    Therefore, the motivation must come from the outside, specifically through the limitation of the access to the most lucrative markets for technologies, goods, and services.

    In this context and in the course of another revision of the License Model (LM) of our SOPR, we already discussed the possibility to allow and license the performance and reproduction of certain parts of our OS in general and we discuss the matter in particular in relation to

  • educational institutions,
  • scientifical institutions,
  • research and development facilities,
  • federal agencies, and
  • State-Owned Enterprises (SOEs), including
    • virtually SOEs with subsidized, federal credit-fueled, or otherwise supported operational livelihood,
    • partially SOEs with up to 5% ownership by a single state and up to 15% ownership by multiple states, and without veto voting right, and
    • purely SOEs with more than 50% ownership

    only in the scope of a single sovereign territory's state, province, direct-administered municipality, city state, special economic zone, and so on, like for example

  • Guangdong, Shenzhen, and Hong Kong in the P.R.China,
  • Texas, Florida, and California in the U.S.America,
  • B.R.Deutschland, French Republic, and Austria in the European Union (EU),

  • which has its own government and jurisdiction, and
  • where an affected entity is located.

    In case of a subsidiary, joint venture, multiple locations, and so on we suggest or accept only one location and jurisdiction.

    An affected entity outside the scope of the jurisdiction of a sovereign territory's state has the option to either

  • cease and desist,
  • sell its corporation to us for a reasonable price,
  • take the opportunity to negotiate an individual arrangement to get an allowance and license outside the scope of the location and jurisdiction of a sovereign territory's state, but this is not granted by default and we would not count on it in advance, or
  • apply as a Main Contractor (MC) of our SOPR.

    For sure, triple damage compensations and other payments are due, because the

  • national and international laws, regulations, and acts, as well as agreements, conventions, and charters, and
  • regulations of the Articles of Association (AoA) and the Terms of Services (ToS) of our SOPR

    remain unchanged and as discussed.

    We will not make multiple AoAs and ToSs for every single location and jurisdiction, but one AoA and one ToS for all.

    Obviously, this regulation does not restrict, distort, or disturb the competition of enterprises that are (truly) privately owned, funded, or operated to the detriment of competitors and customers. Quite the contrary,

  • on the one hand governments do so with their various institutions, facilities, agencies, and SOEs in their domestic markets, but also in international markets, and
  • on the other hand non-competitive entities have to free the market for better ones.

    In this relation, we would like to recall another time that we do not have to open our OS with its Ontologic System Architecture (OSA) at all, specifically if the

  • OS is viewed as a cybernetic self-portrait, self-augmentation, and self-extension, because eventually slavery is prohibited,
  • competition only emerged on the basis of illegal and even criminal actions such as infringements and thefts in relation to the rights and properties (e.g. copyright, raw signals and data, online advertisement estate, etc.) of C.S. and our corporation, or
  • entities have only in mind to exploit the goodwills, undermine the intentions, damage the goals, and even threaten the integrities of C.S. and our corporation.

    The rights and properties of C.S. and our corporation have to be protected in the first place and everything otherwise is only distorting the facts and driving the legal system ad absurdum.

    Anyone, who does not like this revision, should look in the mirror or have a very serious word with the responsible government.

    Legal matter [Open source license]
    Providing HardWare (HW) or SoftWare (SW), which constitutes a performance and reproduction of certain parts of our Ontologic System (OS), as open source, but not free and open source or even free source, requires

  • 1 license from the provider of the open source HW or SW and
  • 1 license from our Society for Ontological Performance and Reproduction (SOPR) for the OS.

    Providing HW and SW, which is based on our OS, as proprietary source, but not open source, requires

  • 1 license from the provider of the proprietary source HW or SW, and a usual note about protected ArtWork (AW) and further Intellectual Property (IP) included in said HW or SW only.

    As long as the end user or customer is not required to pay royalties directly to our SOPR, the non-labelling option is supported in this way.
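
    As a minimal, purely illustrative sketch, the two rules above can be read as a simple decision on the distribution model; the function and the strings below are our own assumptions and are not defined by the AoA or the ToS:

        # Minimal, purely illustrative sketch of the two licensing rules above;
        # the names are hypothetical and not defined by the AoA or the ToS.
        def required_licenses(distribution_model: str) -> list[str]:
            if distribution_model == "open source":
                # 1 license from the provider plus 1 license from our SOPR for the OS.
                return ["provider license", "SOPR license for the OS"]
            if distribution_model == "proprietary source":
                # 1 license from the provider plus the usual note about protected AW and IP.
                return ["provider license", "note about protected AW and further IP"]
            raise ValueError("unknown distribution model: " + distribution_model)

        print(required_licenses("open source"))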

    Legal matter [Raw signals and data]
    We have heard that a manufacturer of Ontoscope on Wheels and a subsidiary of a manufacturer of Ontoscope Components (OsC) have talked or agreed to jointly collect so-called swarm data of the fleets of Ontoscope on Wheels of the vehicle manufacturer.

    Swarm raw signals and data in the

  • legal scope of our Ontologic System (OS),
  • domain of our New Reality (NR) respectively
  • sovereign space of our Ontoverse (Ov), also known as our digital state OntoLand (OL)

    are also properties of C.S. and our corporation, and therefore SOPR signals and data.
    According to the Articles of Association (AoA) and the Terms of Services (ToS) of our Society for Ontological Performance and Reproduction (SOPR), the unrestricted access to raw signals and data is mandatory.

    Legal matter [Sanction]
    We would like to recall that our original and unique masterpiece titled Ontologic System, also abbreviated as OS, and created by C.S. is not covered by any sanction or restriction, which is a requirement for opening our OS, and allowing and licensing the performance and reproduction of certain parts of our OS.
    Merely certain technologies, goods, and services for realizing our OS would be covered by a sanction or a restriction.

    Legal matter [Ontologic Financial System (OFinS)]
    We will not tolerate any copyright infringement in relation to our original and unique, copyrighted, and prohibited for fair use work of art titled Ontologic System, created by C.S., and exclusively managed and exploited by our Society for Ontological Performance and Reproduction (SOPR) with the consent and on the behalf of C.S..
    This includes our

  • Trust Management System (TMS or TrustMS),
  • IDentity and Access Management System (IDAMS), and
  • Consent Management System (CMS or ConsMS), and also
  • Ontologic Financial System (OFinS),
    • Ontologic Bank (OntoBank),
      • Ontologic Payment System (OPS or OntoPay),
      • Ontologic Payment Processing System (OPPS or OntoPayPro),
      • Ontologic Exchange (OEx or OntoEx),
      • Ontologic Bank Financial Information and Communications (OBFIC or OntoBankFinIC),
  • Ontologic Trustworthy System (OTwS or OTrustS),
  • and so on

    of the infrastructures of our SOPR and our other Societies.
    See the issue SOPR #32x or #33y of the 14th of July 2021.

    In this relation, we would like to recall some general recommendations and requirements:

  • Do not obstruct it.
  • Do not copy it.
  • Do not edit it respectively describe it in other words.
  • Do not bend and alter the copyright or risk patent right.
  • Do not think about any unwanted actions.

    Legal matter [Exploitation of reports]
    We would like to

  • make clear that exploiting criminal media reports will result in blacklisting and
  • recall that we have warned again and again.

    Legal matter [Active Components]
    We would like to recall that all

  • Active Components™,
  • System Vehicles™, including System Automobile™ and System Truck™,
  • Computing and Multimedia,
  • and so on

    of our business unit Style of Speed™ are included in the

  • original and unique ArtWorks (AWs) and further Intellectual Properties (IPs) included in the oeuvre of C.S. and
  • scope of opened HardWare (HW) and SoftWare (SW) of our Ontologic System®™© (OS), allowed and licensed for performance and reproduction,

    and therefore a performance and reproduction by members, and artwork and technology licensing partners of our Society for Ontological Performance and Reproduction (SOPR) is not considered as mimicking of C.S. and our corporation.

    We have separated the newly created AWs and further IPs of C.S. in new business units and by explicit description from these licensable AWs and further IPs.

    Infrastructure [Satellite] or Superstructure
    The European Commission (EC) of the European Union (EU) announced to finance and establish a European Internet satellite constellation, because as a sovereign territory it wants an infrastructure that is independent from other sovereign territories.
    Somehow we do not know (not really) why the EC comes up once again with something after the EC has seen it on our website and not many years earlier.
    And it looks a lot like the usual lobbyism and clientele politics.

    Howsoever,

  • in general space is not the sovereign territory of anybody and
  • in particular the regulations of the Articles of Association (AoA) and the Terms of Services (ToS) of our Society for Ontological Performance and Reproduction (SOPR) are effective.

    Consent Management System (CMS or ConsMS)
    As it is well known in the industries worldwide, our SOPR is of the opinion that the choice of which

  • Personally Identifiable Information (PII) to share or not, and
  • Personal Management Algorithm (PMA) to use or not

    should be decided by the user.

    But we also would like to recall that certain

  • raw signals and data and
  • PIIs

    have to be shared by all members of our SOPR in a transparent, anonymous, and legal way, if a

  • common sense of,
  • justified interest of, and
  • reasonable benefit for

    the public respectively the members of our SOPR exist (see also the section Social and Societal System (SoSoS or S³) [Board of directors]).
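
    As a minimal, purely illustrative sketch, this sharing rule can be read as the conjunction of the three conditions above; the function and parameter names are our own assumptions and are not defined by the AoA or the ToS:

        # Minimal, purely illustrative sketch of the sharing rule above;
        # the names are hypothetical and not defined by the AoA or the ToS.
        def must_be_shared(common_sense: bool,
                           justified_interest: bool,
                           reasonable_benefit: bool) -> bool:
            # Certain raw signals and data, and PIIs, have to be shared in a
            # transparent, anonymous, and legal way only if all three conditions
            # hold for the public respectively the members of our SOPR.
            return common_sense and justified_interest and reasonable_benefit

        # Otherwise the choice remains with the user:
        print(must_be_shared(True, True, False))  # False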

    Ontologic Financial System (OFinS) [Digital money supply]
    One of the last major sovereign territories and the last major economic zone, the U.S.America, has begun with the development of its national legal framework, which

  • on the one hand is required for participating in our Ontologic Financial System (OFinS) and
  • on the other hand has been very long overdue for more than 6 years now.

    Specifically, the Department of Treasury, Federal Reserve System (Fed) respectively U.S. American Central Bank, Financial Stability Oversight Council, Department of Commerce, and other federal authorities of the U.S.America will now prepare the issuing of the United States Central Bank Digital Currency (USCBDC), also known as digital Dollar or electronic Dollar.
    (See also the section Legal matter [Ontologic Financial System (OFinS)].)

    In the same context but with the opposite intention, the Chief Executive Officer (CEO) of an illegal exchange for illegal cryptocurrencies and basically the rest of that crypto mob said that the demand to freeze accounts for the reason to enforce sanctions and prevent their circumvention would "fly in the face" of the reason why crypto exists.
    In this way the company confirms that illegal crypto is for bypassing all state controls.

    Ontologic Bank Financial Information and Communications (OBFIC)
    The mythical alternatives to the

  • Society for Worldwide Interbank Financial Telecommunications (SWIFT) and
  • Ontologic Bank Financial Information and Communications (OBFIC or OntoBankFinIC), so to say the Society for Worldwide Interbank Financial Telecommunications of the Next Generation (SWIFT NG) respectively the successor of SWIFT

    are only local systems and have no foreign exchange into the most important real currencies, U.S. Dollar and Euro, and the digital and virtual currencies, OntoCoin and OntoTaler, as well as Quantum Coin©™ (Qoin©™), which are not so much managed and hence much more trustworthy, in total contrast to other currencies.

    Therefore, the foundational problem remains, as others and we explained multiple times in the last months.
    In addition, we have copyright problems, which the

  • SWIFT can stop easily if its format is misused and
  • SOPR can stop if our Ontologic System (OS) with its Ontoverse (Ov) and Ontoscope (Os) is misused.

    Eventually, there are no alternatives to the

  • Society for Worldwide Interbank Financial Telecommunications (SWIFT) and
  • Ontologic Bank Financial Information and Communications (OBFIC or OntoBankFinIC)

    at all.

    Social and Societal System (SoSoS or S³) [Board of directors]
    With our original and unique Social and Societal System (SoSoS or S³) of our Ontoverse (Ov) we have created an incomparable institution and benefit at a protocol level pro bono publico==for the public good worldwide.
    But because we are not interested in the management and moderation of the contents, discourses, and so on of its users respectively the members of our Society for Ontological Performance and Reproduction (SOPR) (see also the sections Consent Management System (CMS or ConsMS) of today and Infrastructure [Co²S and S³] or Social and Societal System (S³) of the issue #33l of the 31st of January 2022), we have already been setting up the board of directors of our S³ for several months, and specifically in the last days many potential candidates could be added.

    The members of the board of directors are elected by the users of our S³ for a term of, for example, 5 years.
    So far we have the following list of proposed members:

  • some persons with fixes for Facebook
  • representative of Electronic Frontier Foundation
  • representative of Mozilla Foundation
  • Frances Haugen, former Facebook product manager
  • Vijaya Gadde, Twitter Chief Legal Officer (CLO)
  • Leslie Miley, former Twitter engineering manager, who started its product safety and security team
  • Brianna Wu, video game developer and computer programmer, candidate for Congress, Rebellion Political Action Committee (PAC)
  • Kirsten Martin, Notre Dame, Mendoza College of Business, Technology Ethics
  • (maybe) Joan Donovan, Harvard University, Shorenstein Center on Media, Politics and Public Policy
  • Jonathan Greenblatt, Anti-Defamation League
  • Andrew Bakaj, lawyer for Whistleblower Aid
  • Imran Ahmed, Center for Countering Digital Hate
  • Karen Kornbluh, German Marshall Fund, Digital Innovation and Democracy Initiative,
  • Adam Connor, Center for American Progress, Technology Policy, former Facebook employee
  • (maybe) Angelo Carusone, Media Matters for America
  • Nicole Gill, Accountable Tech
  • any other qualified person without relations to
    • governments,
    • media, including social media platforms, and
    • commercial interest groups

    suggested by the users of our S³ and our SOPR.

    The overall management is provided by the Main Contractors (MCs) of our SOPR and our SOPR, and the ultimate supervision is provided by the supervisor, to keep the system operating.

    By the way:

  • There will be no illegal clone of our S³ or even a substitute for this mandatory environment or space of the exclusive infrastructures of our SOPR.

    Further steps [Inviting letter]
    The announced inviting letter of our Society for Ontological Performance and Reproduction (SOPR) applies to all entities concerned and has the following structure (see also the issue #33a of the 20th of August 2021):

  • short overview of the document
  • general description of the alleged infringements of the rights and properties of C.S. and our corporation, and other activities
  • short summary of Evoos
  • short summary of OS
  • argumentation about legal position
  • additional information about documentation of activities and accompanying actions
  • request for own in-house investigation
  • request to either
    • cease and desist, or
    • sign, pay, comply
  • out-of-court agreement
  • recognition and confirmation of copyright
  • full admission of guilt (archived in a joint safe for 20 years)
  • expression of official apology through suitable channel
  • payment of triple damage compensations
  • admission fee
  • outstanding royalties
  • payment options, including payment plan
  • license contract
  • establishment of joint ventures
  • regulation of ownership
  • link to the issues of our SOPR as legal framework respectively the Articles of Association (AoA) and the Terms of Services (ToS) with the Discount Model (DM), License Model (LM), and Main Contract Model (MCM)
  • additional formal information, if required.

    As already said, formal mistakes and details can be discussed and negotiated in a constructive way within a reasonable scope.
    But the whole procedure comes down to the

  • payment of the triple damage compensations, the admission fee, the outstanding royalties, and the royalties according to the 1 table of the DM and the 2 tables of the LM, whereby an initial deposit is always required (see the sketch after this list), and
  • implementation of the MCM.
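
    As a minimal, purely illustrative sketch of the payment components just listed (the variable names and the flat structure are our own assumptions; the actual DM and LM tables are not reproduced here):

        # Minimal, purely illustrative sketch; the names and the flat structure
        # are hypothetical and do not reproduce the DM and LM tables.
        def total_due(damages: float, admission_fee: float,
                      outstanding_royalties: float, current_royalties: float) -> float:
            # Triple damage compensations plus the other payment components.
            return 3.0 * damages + admission_fee + outstanding_royalties + current_royalties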

    The rest is more or less in the flow and will only mature and solidify without any surprise in the following months.

    The inviting letter will be sent by our legal team to the major artwork and technology licensing partners, including federal authorities and companies, and will be published on the related webpage of our Hightech Office Ontonics.

    We would like to recommend once again that the materials are read, reviewed, and examined by the management and the legal department together, so that there are no legal and technological misconceptions or unnecessary delays on their side.

    Other potential actions of us have been discussed already and should be no surprise.

    Further steps [Use of royalties]
    We estimate that 99.5+% of the royalties will be reinvested in the infrastructures, technologies, goods, and services of our Societies for the benefit of all members, and artwork and technology licensing partners of our SOPR.


    07.May.2022

    01:11 UTC+2
    Ontonics Blitz Fund I #27.4.15

    At first, we quote a report about the U.S.American Commerce Department and solar energy technologies: "318 solar projects in the US[America] had already been delayed or canceled, and several [Chief Executive Officers (]CEOs[) ...] expect more to follow.
    [...]
    This year was supposed to be a banner year for US[American] solar growth. [An i]ndependent energy research firm [...] estimated the US would add another 27 gigawatts of solar energy this year.
    Now - between the Commerce Department probe, the border seizures, the high cost of solar components and no new legislation in Congress to grease the wheels for more renewable energy - [the energy research firm] estimates the US might only add around 10 gigawatts in 2022.
    [...] the backslide threatens Biden's own climate goal to slash planet-warming emissions in half by 2030. The US[America] would have to install around 50 gigawatts of solar photovoltaic capacity each year from 2022 to 2030 to keep Biden's goal on-track [...]."

    Comment
    Our Superbolt #4 Electric Power (EP) will add 3 terawatts in the years 2022 to 2030 in the U.S.America, but also in the European Union, P.R.China, and India, and considerable amounts in South America, Africa, Canada, U.K., Australia, Japan, Middle America, and other regions.

    And when we say our EP, then we do not mean any bandwagon jumper or bad actor, because we also have the required batteries and even own the systems of the field of Cybernetics, and also Ubiquitous Computing and Internet of Things (IoT) for the smart grid and significant parts of the smart grid infrastructure with related facilities and services, and so on.
    Investors know where to put their money. Hint: It has to do with our mouth. :)

    There will be considerable discounts and further incentives for friendly entities.

    Style of Speed
    Further steps

    We are pleased to announce the coming of 2 new models of our 9EE model series, with the

  • 9EE Cabrio, and
  • 9EE Speedster.

    The 9EE Cabrio features an

  • electric convertible soft top, and
  • tonneau cover as option, or
  • rigid removable hardtop as option.

    Roadster!!!

    The 9EE Speedster features a

  • lower windscreen, and
  • tonneau cover with a convertible soft top, or
  • rigid removable hardtop as option.

    Speedster!!!


    08.May.2022

    15:31 and 20:51 UTC+2
    Clarification

    *** Sketching Mode - Work in progress - more quotes, comments, better order and explanation ***
    When looking at the fields of

  • Natural Multimodal Processing (NMP),
  • Knowledge Graph (KG), and
  • Intelligent Personal Assistant (IPA)

    in relation to the infrastructures of our Society for Ontological Performance and Reproduction (SOPR) and our other Societies, we noticed some funny and serious things.

    Interestingly, we have observed and worked out many attempts to rescue scientific works respectively to cure their deficits all the time in the year 1999 and the following years, often by applying cheap tricks to mislead the public.
    As we noted in the Clarification of the 13th of April 2022, we already found out that around the years 1998 and 1999 developers of frameworks and middleware in the field of Multi-Agent System (MAS) had no clue about the fields of operating system (os) and Distributed System (DS), including Peer-to-Peer (P2P) Computing System (P2PCS), as well as resilience, and so on, and began to reinvent already existing solutions of the latter fields around 1999.
    Some years later, we made the same observation in relation to developers in the fields of cybernetics, logics, and ontology, as well as systems based on them, including the Semantic (World Wide) Web (SWWW).

    In the course of this, we noticed that it would be useful for a better understanding of KG to quote works about the fields of Dynamic Semantic Web (DSW), and Cybernetic Ontology and Web Semantics, which we already related to KG. As a side effect, it also provides us with a better understanding of our Evoos and OS, and Linked Data (LD), and other related matter.

    Eventually, one can see how visionary and far-reaching our Evoos already was in 1999 and how important it is for everything that followed.
    In addition, it becomes now really interesting in relation to our Ontologics®, because everybody can see how visionary and ingenious our masterpiece OS already was in 2006 by integrating all in one.

    We continue the discussion of the various fields and related works in relation to

  • ontology,
  • cybernetics,
  • Model Theory (MT),
  • messaging,
  • Real-Time Computing (RTC),
  • Peer-to-Peer (P2P) Computing (P2PC),
  • resilience,
  • multimedia,
  • Multimodal User Interface (MUI),
  • Intelligent Personal Assistant (IPA),
  • Autonomic Computing (AC),
  • etc.

    At first, we quote and comment works of the field of cybernetics related to morphogrammatics, kenogrammatics, and PolyContextural Logics (PCL).
    We also take a closer look at the Arrow System and show that it is merely PCL in relation to the TUNES OS.
    This leads to the

  • Symbol Grounding Problem (SGP) and Physical Symbol System Hypothesis (PSSH) on the one hand and
  • Binary-Relation Model (BRM) on the other hand.

    Regarding the first topic, we also quote a work about Dynamic Symbol System.
    Regarding the second topic, we discuss BRM and related works of the fields of DataBase Management System (DBMS) and Multimedia System (MS).
    This also leads to technologies, applications, and services based on ontology and closely related to the Semantic (World Wide) Web (SWWW):

  • ontological Knowledge Base (oKB or ontoKB),
  • Multimodal Dialogue System (MMDS) or Multimodal User Interface (MUI), and

    Regarding the PCL and the SWWW, we get directly to the Dynamic Semantic (World Wide) Web (DSWWW).
    Via PCL or subjective logic, we also come to common sense computing and Cognitive AS, and further to Linked Data, and eventually to Knowledge Graph (see the sketch below).
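
    As a minimal, purely illustrative sketch of how binary relations become the linked nodes and edges of a knowledge graph (assuming the common rdflib package; the example resources are invented and taken from no quoted work):

        # Minimal, purely illustrative sketch: binary relations as Linked Data
        # triples forming a small knowledge graph; the example data is invented.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.Evoos, RDF.type, EX.OperatingSystem))  # node, edge, node
        g.add((EX.Evoos, EX.createdIn, Literal(1999)))
        g.add((EX.OS, EX.integrates, EX.Evoos))

        print(g.serialize(format="turtle"))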

    Correspondingly, the quoted and commented works include:

  • Kontextur Différance Kenogramm
  • Email of a student/researcher about Cybernetic Ontology and Transjunctional Operations
  • Introducing and Modeling Polycontextural Logics
  • Arrow System
  • Derrida's Machines
    • Part I Cloning the Natural, Proemiality, Panalogy, Parallelism
      • Cloning the Natural - and other Fragments
      • Some Applications
      • The new scene of AI: Cognitive Systems?
      • Non-Academic Projects
      • Some non-technical background texts to PCL
      • Exploiting Parallelism in PCL-Systems
      • Minsky's new Machine
      • Comparatistics of Models and Metaphors of Machines
    • Part II Dynamic Semantic Web
      • Wozu Dynamic Semantic Web?
      • Towards a Dynamic Semantic Web
      • Cybernetic Ontology and Web Semantics
      • Dynamic Semantic Web
      • Dynamics in Ontologies and Polysemy
      • From Metapattern to Ontoprise Ontologics
      • {maybe more}
    • Part III Fibonacci
  • More on Less: Further Applications of Ontologies in Multi-Modal Dialogue Systems
  • SmartWeb Handheld: Multimodal Interaction with Ontological Knowledge Bases and Semantic Web Services
  • Linked Data
  • Online encyclopedia about the subject knowledge graph 29th of June 2020 (first version)
  • Online encyclopedia about the subject knowledge graph 1st of May 2022 (current version)
  • Towards a Definition of Knowledge Graphs

    We quote a webpage, which is about the debate about symbol vs. subsymbol, and symbol grounding, morphogrammatics, kenogrammatics, and PolyContextural Logics (PCL), and was published in 1991 (quoted here in translation from the German original): "Kontextur Différance Kenogramm
    Deconstructive Remarks on the Symbol-Subsymbol Debate in AI
    Looking at the current development within AI research, the impression of a certain stagnation cannot be denied. While the pragmatic and industrial application in robotics, computer technology, and design records success after success, this engineering "spill" is not matched by a corresponding dynamic on the conceptual and foundational-theoretical side. The "eureka" of the 1950s and 1960s, when it was believed that all human cognitive achievements could be reproduced on the computer within two decades, seems to have receded into the far distance; instead of euphoria, the politics of the smallest steps has long since moved into the laboratories of the chief theoreticians. In view of this disillusionment, and with a view to getting a bit closer to the longed-for breakthrough after all, it seems sensible to recall the two great paradigms of AI, and this in particular under the aspect that, by showing their respective anchoring within a specific philosophical tradition, is able to achieve transparency for their specific possibilities and limits.

    Symbol Processing and Subsymbolism - The Classical Concepts
    While at the beginning of the 1950s two great paradigms still stood side by side in equal competition, over the course of time one of the models moved almost completely into the shadow of the other: the paradigm of symbol processing founded by Newell/Simon (Physical Symbol System Hypothesis) had displaced connectionism, which oriented itself on the modeling of the brain.
    The basic assumption of the PSSH is that brain and computer function identically despite different architectures, if both are understood, at certain degrees of abstraction, as mechanisms for the representation and relation of symbols. That is, it is stipulated as a premise that thinking and understanding of the world occurs as an intrasystemic mapping of external data. Of course, this does not mean a simple one-to-one transfer; rather, in the tradition of Frege, Russell, and Whitehead, a structure of transformations is implied, with the help of which complicated and complex contents can be decomposed into atomistic components, which are then available for symbol processing. The central keyword is thus to be seen in representation, regardless of how fine and subtle this may be conceived in the individual case.
    Connectionism took the entirely different path when, exactly the other way round, it did not abstract from the architecture in order to work solely with the represented symbols, but rather directed its interest to the architecture of the human brain. That is, the brain as a neural network was taken as the model for the computer architecture, in order to model the mode of functioning found in the natural domain. In place of the manipulable symbol stepped the neuron, whose lateral excitation (or sedation) within a net of other neurons was then understood as behavior, whereby this network can exhibit behavioral structures that refuse explicit analysis a posteriori. The difference between the two paradigms can be made precise to the effect that the former can be conceived as a static model, the latter as a dynamic one. For even if the advocates of symbol processing never tire of asserting that they are concerned with the logical structures necessary for the modulation of symbols, this approach cannot be grasped as a structural one. This insofar as the representational basis of the PSSH inevitably classifies it as a substantialist one, i.e., the basic precondition lies in the indissoluble binding to the implementation of static entities, even if these are then manipulated, following logical structures.
    The dynamics of connectionism is found, on the one hand, in the freedom one grants oneself methodologically with respect to the end product. Here, where in contrast to the PSSH the horse is bridled from behind, as it were, the point is precisely not to find/implement a logical structure that is up to a specific problem, but to fathom what kind of system can develop a specific property. On the other hand, connectionism delivers a model that is dynamic compared to the PSSH, insofar as here one can indeed speak of a structural approach, when neuronal connectivity abstracts completely from substantial content in order to look at the purely functional interacting and mutual activating within the network. Thirdly, a stronger dynamic can be recognized in the system-immanent disposition, which is not directed at symbol-based problem solving, but intends a learning of the machine in its own right.
    If this is the great alternative in which AI is caught, then recently an increasing turn towards connectionism can be seen, which gains the upper hand over symbol processing in the shape of neoconnectionism, of parallel distributed processing [(PDP)], of biofunctional distributed learning and remembering (BDLR), or Hofstadter's subcognitive mentality of statistical emergence.

    The Philosophical Background of the Alternative
    a) Representation and Identity
    [...]
    This is exactly what Dreyfus and Dreyfus do when, in their essay "Creation of the Mind or Modeling of the Brain?", they connect the two theoretical concepts to their philosophical moorings.
    The underlying scheme of classification runs along that disjunct conception which, on the one hand, believes in the complete theorizability of world and world experience, or, on the other hand, denies it, and which is labeled by the pair of brothers as the alternative of atomism and holism. Theorizability then means the fundamental possibility of a formalization of all intelligent action, means the reductionist description of mental acts as a complicated relational structure of distinct elements, as presented in its basic features in the Leibnizian mathesis. This is an atomistic view insofar as a sufficient accumulation of information units and an adequate system of rules for their manipulation is sufficient to be able to render the knowledge systematized in this way operational. The deepest roots of this atomistic representationism lie in that philosophy which has shaped the thinking of the Occident like no other, in Platonism, whose conception of idea and participation, of archetype and image, forms the basis of every notion of representation. Just as there the real things appear only as deficient modes of appearance of the intelligible ideas, i.e., of their archetypes, this scheme continues when Aristotle transposes this relation to the theory of signs, that is, to language. From here, the mapping theory permeates the whole of philosophy in more or less complex form.
    The advantages of such a representationalism can be clearly recognized in the relatively unproblematic transfer into a formalism, when the entire world finds entry into the calculus as a semantic and syntactic system of predicates and connection rules. That is, the high degree of operability and a theoretical basis that at first glance is plausible to everyday understanding, which with the subject-object split mirrors the apparent relationship of human and world, of system and environment, make atomism appear as an extremely attractive methodological concept for a machine reproduction of cognitive achievements.

    b) The Farewell to the World
    [...] to subject the entire philosophical tradition to a revision [...] The guiding question to be pursued then aims at how the inside of the cognizing subject must be constituted in order to reach, from out of this sphere, its object conceived as "outside" at all, and how, conversely, this "outside" constitutes itself anew when the cognizing instance of the subject undergoes a new figuration.
    One of the main attacks of Heidegger's fundamental ontology is directed precisely against the classically transmitted subject-object split, within which a cognizing center equipped with consciousness faces a sphere of objective being untouched by and independent of this instance. According to the classical concept, objectivity exists in quiet self-sufficiency in itself, i.e., on the one hand it is not affected by the cognizing subject, just as on the other hand it presents itself as the same for all subjects. The universe of the totality of beings is indivisibly identical with itself; it is absolute and thus potentially accessible from every position to the same degree. But if the question now arises how this external outside can be accessible for the subject, or how the subject can open or leave the immanent space of its cognition in order to appropriate knowledge about the world, then the tradition either wraps itself in silence, declares this impossible with Kantian resignation, or performs the daring convolutions of speculative idealism, which without further ado relocates the true world from the outside into the faculty of cognition itself. If the Platonic heaven of ideas is thereby transferred into the head of the philosopher, the question of how access to the deficient and inferior outside world is now to be conceived remains in the dark.
    Heidegger's critique sets in exactly at the concept of an absolute world concept conceived as external, when he dismantles that picture which draws the totality of beings in summa as world. He dissolves the statics of this purely additive concept, when world now appears as the medium within which a being of the kind of Dasein (subject) recognizes how it can relate to itself and to other beings. This because Dasein is only able to constitute itself in the transcendence over beings, because in its comportment towards the world it experiences itself as a being that is given to itself as a self. But if Dasein is given to itself, this means nothing other than that it exists for its own sake. [...]
    If it is thus shown that, on the one hand, Dasein exists for its own sake, but that, on the other hand, the world as the 'whereupon' of the surpassing that distinguishes the self belongs inseparably to selfhood, then it follows from the synopsis of these two lines that the world is in its essence related to Dasein. [...]
    [...]
    But precisely because Dasein only constitutes itself as a self by coming back to itself qua transcendence, the world phenomenon conversely also cannot be subsumed under the domain of objects, for then Dasein, which essentially cannot be counted among the existing things, would be absorbed into the totality of the manifold.
    This is the ambivalent tension in which the world concept of the "Critique of Pure Reason" already stood in rudimentary form, and this oscillation, in which the world swings back and forth between 'subject'/Dasein and the object sphere/beings present-at-hand in each case, is not resolved by Heidegger towards one side or the other, but taken up in its full tension and installed in the concept of being-in-the-world.
    [...]
    Projection of Dasein's possibility is projection of the "for-the-sake-of-which" of Dasein; the "for-the-sake-of", however, is, as shown above, the basic character of world, which is why the original projection of the possibility of Dasein coincides with the "projection of world".
    The structure of transcendence essentially belonging to Dasein thus lays the ground for the fact that beings can show themselves at all, both as present-at-hand/object and as Dasein/subject. [...]
    [...]
    If the question now arises as to the consequences of these subtle considerations for AI, it can first be recognized that the hopes of an even only approximately complete knowledge acquisition of the data about the world are not only pragmatically unsolvable, but cannot be possible in principle, since the demanded reference object "world" does not exist at all; rather, world permanently generates itself for the system in each case in the acting and interacting of the system.
    Furthermore, and here the concept of identity-theoretical representation itself is dismantled, the postulate of the atomistic symbol or sign can no longer be maintained. For the Heideggerian acknowledgement of the world as the "for-the-sake-of" of Dasein can be understood as an extrapolation towards a new theory of semantics that is no longer static, but a dynamic discussion of the question of the genesis of sense and meaning, of semiology.

    c) Thinking against Identity
    This view is prepared by Ferdinand de Saussure, who overcomes the arbitrary, but still substantialist constitution of the sign out of concept and sound-image by recognizing that sense is actually created out of the differences between the signs. That is, not the presence of the sign, but the absence at work between the signs grows into the actual catalyst of semiosis, when it is first of all the difference of the signs against one another that confers their identity on them. It is the same movement of thought that Heidegger follows when he exemplifies his ontology of everydayness. [...]
    Jacques Derrida gives full validity to this movement, which pervades Heidegger's analysis of being and which surfaces within the philosophy of language with Saussure, when he gives it shape in the figure of différance within the framework of his grammatology. The difficulties that repeatedly give occasion for the disparagement of Derrida may well find their origin in the extreme complexity as well as in the impossibility of a positive definition of the phenomenon of différance. That différance must refuse this positive predication has its reason in the fact that it captures the happening which, on the one hand, brings forth the differences and, on the other hand, produces their effects. Différance thus appears as the metaphysical name of an effect, of a donation, which must itself withdraw from naming if it is not necessarily to be subsumed under the genesis of the sign made possible by it. While Saussure still remains with the sole observation that there are only differences in language, with différance a concept of the dialectical grounding of these differences arises; with différance the happening can be thought which, itself elevated above the difference of presence and absence, guarantees that sense generates itself out of absence in order to give room to presence. If the logic of difference says that marking a difference can only succeed if something has previously been recognized as different, and that recognizing something as different requires the prior difference, then this dialectical grounding of distinguishing and the distinguished, or the self-referentiality of difference, expresses exactly the structure of différance that withdraws from the positive naming of itself. Différance is not, différance works, is how Derrida describes it as a clear index of the asubstantial constitution of this dynamic, whose peculiarity it is precisely to conceal itself in its working. For there where the happening of différance inscribes itself, it withdraws from the presence of sense that has come into its presence through it, and remains virulent, still behind the absent that is repressed by presence and that is nevertheless its condition, as the enabling of the dialectical mediation of presence and absence.
    It thus becomes apparent that the thorough critique of the position of the PSSH, designated by Dreyfus/Dreyfus as atomism, flows continuously into the discussion of the philosophical foundations of the competing approach of the network theoreticians. For when there the unit, as a unit of meaning that can no longer be gone behind, is broken up with the help of the meaning-free microfeatures, this clearly corresponds to the tendency just sketched to no longer bind sense to the identical bearership of the sign, but to locate its original location in the play of differences, in the dialectical grounding of presence and absence, hence no longer in the (identical) same, but in the other. Likewise, network theory exhibits a clear structural affinity to the reservations formulated by Heidegger, the constructivists, and second order cybernetics against the dualism of a static and absolute world that is set over against the equally statically conceived subject as the object of cognition. For when the one-time, comprehensive, and non-circumventable input of the knowledge engineer is replaced by the dynamic connectivity of recursively interacting processes, then this corresponds to the destruction of an incontrovertibly pregiven outside world by the coupling that forms itself ever anew in the interplay of system and environment, which then always possesses a structuring function for both components.

    d) The Anchoring of the System in the World
    With this coupling, which actually generates the system as well as its environment reciprocally, nothing other is addressed than the learning of the system, if, with Maturana/Varela, this is understood as the change of the structure of the system that leads to a modification of the coupling of system and environment. Put differently, learning as a viable change of structural coupling then appears as the ongoing execution of the autopoiesis of the system, and thus becomes a purely dynamic paradigm that no longer requires the static component of a memory store. Insofar as learning concerns the entire structure of the system, i.e., is no longer linked with the notion of an engrammatic fixation in a subsystemic store, one can then conversely postulate with Heinz v. Foerster that memory is everywhere. Memory thereby becomes a metaphor for the respective degree of internal structure that enables the system to orient itself in its environment while maintaining its organization. But this then means that autopoietic systems necessarily live in the present, and thus cannot remember in the sense of a recourse to the past; rather, remembering now appears as the actualization, performed in each case, of a structural possibility for coupling with regard to its recursive modification, i.e., the continuation of its autopoiesis.
    [...]

    e) The Reflection-Theoretical Focusing
    [...]

    f) Reflection without Object!?
    If the subject-object dualism has thus been filed away by analytical means as a historical phenomenon, the question arises to what extent the program of a theory of reflection pursued by Günther can still stand at all, since reflection always means reference to a process of depicting, representing, or meaning. [...]

    f) The Classical Impossibility of the Other I
    The underlying problem of the subject-object split consists in the identity of the subject, which confronts itself in thinking, not being guaranteeable, or, conversely, in the impossibility of an object if the identity is to be maintained. [...]

    g) The Reflection-Logical Reversal: Construction instead of Description
    It is thus sufficiently clear that both Heidegger's and Günther's approaches do not move as posthumous solutions within the classically determined scheme of thought; rather, they are situated on a completely different level, since they do not aim at the adequacy of dualism or anti-Cartesianism, but conceive this problem itself as an epiphenomenon in the first place. For Günther this means that it is not a matter of deciding for one position or the other, that his logic of reflection rather illuminates the conditions on which such an alternative can develop in the first place. The question to be asked thus no longer circles around the compatibility of a logic of reflection with regard to the newly gained state of knowledge, but aims at the conditions of the possibility that such problems can be formulated at all. [...]
    But if this "second-order logic" now thematizes the consistency, decidability, or completeness of logical systems, then its actual subject matter is the entire domain of reflection of formal logic, whereby the object of this logic is to be recognized not as the object sphere of (classical) logic, but as its principles themselves. Since these no longer/not yet have any reference to the objective world of being, they can then no longer be captured in the classical grid of "true" and "false"; here the place is entered where structures set themselves in relation to structures, where form itself is subjected to its formalization. [...]
    [...]

    Pragmatic Problems of Subsymbolism: Formalization
    From the course so far it should have become sufficiently clear to what extent an AI committed to representational thinking and founded on identity theory runs into fundamental limits in its possibilities of development. [...] neoconnectionism, too, cannot escape the question of the way in which concepts are stored, of how representations can be explained and processed non-symbolically. The path taken here leads, as already indicated, into a subatomic domain, if one understands the symbol or the unit as an atomistic elementary particle.
    If the unit is the identical bearer of a concept, i.e., if it is a strictly localist representation, then network theory breaks up this local unity in order to grasp meaning and representation as the respective activation, strengthening, and weakening of different microfeatures that are in themselves semantically meaningless. Representation thus happens as the interplay of subsymbolic units which, distributed over the network, can take on semantic content, symbolic character, only in their specific interacting. In terms of sign theory or grammatology, these are the structures as described by Saussure and Derrida, which there, however, withdrew from language itself. How then, to put the problem pointedly, can the happening of différance, metaphorized as the play of differences, which itself marks the limits of language, be transferred into a calculus? For formalizations, as levels of abstraction, still work on the basis of semantics, i.e., they are fundamentally still representational. Put concisely, the demand could thus be framed as that for a representation-free representation, which, however, cannot be conceived on the ground of classical logic, which is always a value logic.
    What is thus demanded would be a procedure parallel to the step carried out by Günther of overcoming the object-boundness of reflection, would be a calculus from which all objectivity is eradicated, which defines itself as a pure calculus of structures over structures. Such approaches to leaving the value-boundness of logic can be found, for instance, with the calculus of indication of George Spencer Brown, or its extension by Francisco Varela. In doing so, however, they neglect another fundamental condition of cognition, as it announces itself in différance, self-referentiality, even if Varela declares his calculus a calculus for self-reference.
    Self-referentiality blows apart every classical calculus, since circular structures mean the death of logic. Thus, for an adequate calculus, the double demand arises of being able both to "represent" free of representation and to map self-references free of antinomies.

    Negativity And Polycontexturality
    a) Beyond language
    The thinking that recognizes the enabling of meaning in the différance, with its reciprocal relation of grounding between presence/absence, has abruptly rid itself of the paradigm of identity, of presence. Yet the language in which this process of estrangement and leave-taking takes place is still that of representation, of positivity, from which even the crossings-out carried out by Heidegger and Derrida and the "inauthenticities" aimed at by means of quotation marks provide no escape. That is, the language that appears here as positive language reaches its limits when it attempts the "salto mortale" of breaking out of its own logocentric conditionality with the metaphysically burdened means immanent to it.
    Thus arises the demand to escape the positive language, which is inadequate in its possibilities. Günther's claim therefore remains justified, seeking to escape precisely this dilemma by conceiving a language that no longer refers to positive being: the negative language. This is not exhausted in being an artificial language that would be set against the natural languages, for artificial languages, too, remain bound to the concept of positive language. On the other hand, negative language also does not mean formalizing colloquial or positive language and subjecting it to the law of number; rather, the conditions of the possibility of natural and artificial language as such are to be inscribed.
    If, then, negative language aims at making representable the condition of the possibility of language as such, the repressed genesis of semiosis, and if differentiation has been recognized as this condition, then the main interest of such an approach must lie in the mapping of difference, in the representation of the process of differentiation, of différance. Put differently, this means that the level of negativity to be entered here finally takes leave of all substantiality and statics of being, in whose place there now comes a dynamics that alone is able to capture the dialectical alternating movement of différance. At the same time, however, it is thereby clear that such a representation must always be a structural one, which, measured against Saussure's structuralism, then proves to be a suprastructuralism, since the structures exhibited here underlie, in the first place, that system of differences exhibited by Saussure.

    b) Contexture and proemiality
    As a first step in this direction, attention shall therefore be directed to Günther's logic of relations, according to which a relation consists of the two relational members of relator and relatum, which also appear as operator/operand. Here operator and operand stand in an unambiguously directed relation of order, which possesses absolute validity within the relation. However, this hierarchy undergoes an interrelational relativization in that the operator of one relation can appear as an operand with respect to another relation, just as this exchange holds for the operand of the first relation. Thus, for two relations, considered as a whole with respect to their operators/operands, a relationship of both order and exchange relations can be established. Within a relation there is an unambiguous order structure, while between the respective relations, with respect to their operators/operands, an exchange relation prevails. This complex interplay of order and exchange is governed by an independent transclassical relation, which Günther introduces under the name proemial relation. Proemiality can thus be understood as that property or that relation which allows one and the same datum to be grasped, with respect to different reference systems, in a different and now functional role. What appears as relator with respect to one relation counts as relatum for the other and (then, however, necessarily) vice versa. At the same time, however, it is thereby evident that the frame of monocontexturality has been left, insofar as discontexturality is the necessary condition for the classical identity theorem to be suspended without contradiction, whereby this suspension refers only to the intercontextural space. For at this seam between the relations or contextures, where the proemial relation governs the exchange relation, the break with classical logic occurs, insofar as its trinitarian legislation is undermined here. Proemiality thus proves to be a, if not the, fundamental concept of the theory of polycontexturality, insofar as with its help it is possible to grasp, conceptually clearly and without any ambiguity, that overdetermination which overtaxes the unambiguousness of classical logic and which can be seen as the decisive criterion of demarcation of the transclassical perspective. Proemiality then emerges as the decisive instrument for treating these overdeterminations concisely and consistently, and thus becomes that element which, within polycontexturality, brings into play the decisive dynamics that overcome the thinking of identity. Overdetermination, i.e. the property that one and the same datum can simultaneously fulfill two different functions that are contradictory when considered within one contexture, is, however, the necessary precondition of a possible mapping of dialectics. This insofar as the reciprocal grounding of operator/operand must remain closed to a linear and monocontextural thinking, since on the ground of classical logic it is not possible to pronounce a both-and; rather, the law of the excluded middle demands a one-time and irreversible fundamental decision in one direction or the other. What has once been declared an operator must continue to play this role for all time, whereby, for the context of interest here, either the distinguishing is recognized as prior, which then constitutes the condition of the distinguished, or, conversely, the difference arises only on the basis of the different entities.

    c) The self-referentiality of the distinction
    The contexturally mediated order and exchange relation of operator and operand, however, taken by itself, represents only a functional schema which, as a figure of thought, certainly proves fruitful for grasping the mechanicity of différance, but which, as seen, still unfolds entirely on the ground of positive language. What is further needed, then, is a transfer of this schema onto that domain withdrawing itself from the positivity and identity of being, from which the différance, chora, etc. feed. Transformation and transposition of the described schema onto the dimension of negativity thus means the grounding and application of this mechanicity in a frame in which no positive being, no clearly determinable entities are encountered anymore, in which not even substantialities of positive language reducible to the smallest have any place, which could in the first place be distinguished in their differential content. Total reduction of being then means, in the final consequence, also taking leave of the last bastion of positivity in the logical calculus, means taking leave of the value assignment traditionally handed down there.
    Abstraction from any value assignment of the formalism then counts as the advance onto a level which, presemiotically and prelogically, as a pure domain of structure, comprises the interplay and functioning of non-designative empty structures, which as suprastructures accordingly no longer find themselves in the dichotomy "true-false". This domain, which reveals itself after abstraction from any value assignment, nevertheless does not coincide with pure nothingness, insofar as, unlike the latter, it does not appear as an isomorphic counter-concept to pure being. If being and nothingness are structurally identical as isomorphic dimensions, then with negativity a sphere is to be grasped that resists the dual "anti" of being and nothingness in that, following a "trans", it leaves this bipolar grid altogether.
    If structuralism counts as a system of differences, which, however, recognizes their play in the analysis of positive distinctions, then from the analogy of a "structuralism of structuralism" emerging here it can be said that this must present itself as a difference system of a difference system. That is, if previously values were determined only from their mutual differential content, or if previously values entered into difference, it is now a matter, after abstraction from these values, of setting differences themselves into difference. Only here does the play of differences actually occur, insofar as here differences alone distinguish themselves from one another as differences. But if solely differences stand opposed to one another as different, then it is thereby evident that here all positivity has finally been left behind, since difference can only ever be experienced asubstantially and negatively as the respective "not" of an other.

    Kenogram And Form
    a) Non-designative formalism as the condition of operational dialectics
    At this point, now, where the positive representation of the differences of differences is at stake, where, that is, positive mapping in the domain of negativity is demanded, the question of the form of such a mapping arises. But this is then not solely the question of the adequate form of the mapping, but the question of form itself, of the form of form. For obviously this question of form proves to be the transformation of the original concept of form, insofar as its situating in the domain of negativity means the complete eradication of its counterpart, of content, of substance. The form intended here no longer has any positive material to be formed at its disposal; it can be grasped solely as the formal happening on negativities. That is, the question of the form of the mapping proves to be the question of the form of form, which is demanded as an operational notation within which no positive datum has any place anymore, in which differences are inscribed as differences, in which, thus, a nothing is inscribed that is not nothing.
    The place of this inscription and its stylus are then found in the kenogrammatics, or the kenogram, conceived by Günther (Greek kenos = empty). A kenogram is here understood as an empty form that carries out the founding of the value assignment grounding classical logic precisely by abstaining from this value assignment, that is, by being situated beyond the value duality "true-false". In this way kenogrammatics prepares the space within which difference can be notated as the pure distinction of two kenograms, without succumbing to that infinite regress of self-grounding which inevitably sets in if one seeks to grasp difference as difference in the domain of positivity. For if there the question of the identity-generating concept of the entities to be distinguished inescapably imposes itself [...], this necessarily leads back again to that circularity, unmanageable by classical logic, which must arise on the ground of the thinking of origin.
    In the notation of two different kenograms, however, the problem of such a concept disappears, since their function consists solely in marking the respective "not" against the other, while at the same moment - freed from any substantiality and situated in the proemial exchange - they are also above the suspicion of the identity tied to this concept. Thus a kenogram complexion, to which Günther gives the name morphogram, then actually appears as the inscription of the distinction, of the différance in its doubled content. For only now is the possibility given of bringing the distinguished and the distinguishing into a form that is no longer subject to the question of the priority of the one or the other. If mere being-other with respect to a One can be notated without recourse to a presupposed substantial identity, then it becomes possible to overcome the question of origin, insofar as operator and operand, proemially mediated, exchange their roles at will.

    b) Complexity, tabularity, recursivity, non-designation; an old question in a new light
    The abstraction from any value assignment thus enables, on the one hand, that the distinguishing and the distinguished can encounter each other only as merely distinguishing and distinguished, without first constituting themselves on the basis of a prior distinction. Herewith Derrida's positive-linguistic attempts at delimitation and circumscription would thus be transferred into the consistent form of morphogrammatics. On the other hand, such a structure of empty forms, within which the monocontextural rigidity of the identity theorem as well as of the law of the excluded middle has been given up in favor of a proemially mediated dynamics with respect to operator/operand, means that the question of overdetermination and identity, which remains insurmountable even for a formalism as abstract as Spencer Brown's, can now be sublated in a formal apparatus that finally takes leave of the thinking of origin. For where Spencer Brown needed the way out of the re-entry, leading into the infinite regress, in order to guarantee the self-referentiality of the distinction, the proemiality of operator/operand, distinguishing/distinguished here offers for the first time the possibility of completely escaping linearity as well as temporal succession, in order to install in their place a reciprocal equiprimordiality, and indeed in a conceptually and methodically consistent form.
    If this is the way to represent self-references free of antinomies, then the kenogrammatics introduced by Günther supplies the calculus which, without being representational itself, is able to represent the genesis of representations, semiosis. Here, then, is the place that is able to accomplish the demanded synthesis, when kenogrammatics, abstaining from any value assignment, i.e. with the last possible extinction of objectivity, together with a simultaneous polylogical distribution guaranteed by discontexturality, is capable of delivering the both-and of a calculus that is not value-laden, representation-free, and at the same time self-referential. If kenogrammatics is the way to a non-representational representation, then the departure from the received Aristotelian concept of monocontexturality, polycontexturality, supplies the necessary tool for an antinomy-free formalization of the necessary self-referentiality. Polycontexturality then means tabularity in place of linearity, heterarchy in place of hierarchy, sameness of othernesses in place of identity, means nothing less than a new rationality, no longer classical but transclassical.
    As has been shown up to this point, the question "What is cognition?" addresses one of the fundamental questions of philosophy, insofar as it conceals the question of the conditions of the possibility of human experience of world, i.e. in what relation thinking and being can be understood. If here the polar attempts at an answer of image and construction, of engram and parallel distributed process, stand irreconcilably opposed, then a fundamentally different perspective is needed, one that leaves the immanence of the various approaches and reflects on their very ground of enablement. At which points and with what valence Günther's approaches of a theory of reflection, theory of polycontexturality, and kenogrammatics can intervene here in a supplementing and correcting manner may have become somewhat clear from the sketch-like presentation. Their verification is the project of ongoing research work, which then proves to be the machinic execution of cognitive structures.

    This work was supported by funds from the Volkswagen Foundation.

    [...]"

    Comment
    All right? Not really. :D
    Honestly, we never bothered about all of those debates, insights, philosophies, and concepts. Much talk, few solutions, and no risks taken to make decisions. Instead, we prefer to do it right in practice just right from the start. In fact, the creation of The Proposal took some months and it nailed it down to 100% and even added so much more without knowing all of the works quoted herein at all.
    No wonder that founders and internationally most recognized leading pioneers of the fields of Artificial Intelligence (AI) and Cybernetics ran rampant and had nothing else to do since at least the year 1998 than to steal our Evoos.

    The quoted work was published in the collective volume titled "Kybernetik und Systemtheorie - Wissenschaftgebiete der Zukunft?" in the year 1991.

    The quoted work also confirms that there was no hype in the fields of Artificial Intelligence (AI) and Cybernetics at that time, and that both were very exotic fields and even career killers, as we call the situation.

    We have a different point of view, which in the tradition of our working philosophy of integration, unification, and fusion even brings both views of objectivity and subjectivity together by saying that all and nothing and everything in between is possible, and by showing how to do so.

    Eventually, it ends up in a fractal and infinite regress(ion) (see also the Originals and Pictures of the Day of the 7th of May 2012).
    Therefore, we took a shortcut and defined the universal fractal as Zero Ontology or Null Ontology or Ontologic Zero or Ontologic Null, and structured all the rest around it by power set, hyperset, hypergraph, or whatever fits.
    Et voilà, no highly intellectual blah blah blah is needed anymore, but common high-technological action realized straight out of the void or kenos.

    This led us to the

  • Physical Symbol System Hypothesis (PSSH) and
  • Symbol Grounding Problem (SGP).

    See also the documents Physical Symbol System Hypothesis (PSSH) and Symbol Grounding Problem (SGP), and the email of a student/researcher quoted below, which also mentions the linguistic problem.

    This is the point, where we often begin to talk about classical logics and non-classical logics, cybernetical logics (e.g. polylogic, PolyContextural Logic (PCL) or subjective logic, arrow logic, fibring logic, holologic, etc.), many-valued logics, specifically three-valued logic and Fuzzy Logic (FL) as its expansion, modal logic, and also Multi-Dimensional Dynamic Logic (MDL) Programming (MDLP) (see for example MINERVA), graph logic, rewriting logic, and end with computing with words, literate programming, and complexity from the point of view of Algorithmic Information Theory (AIT), as well as unorthodox computing paradigms, such as Interactive Turing Machine (ITM) respectively Turing Machine with interactivity, Turing Machine with Input and Output (TMIO) or Interaction Machine (IM), Holonic Agent System (HAS), Multi-Agent System (MAS), etc.

    Indeed, this oscillation between the maxima can be realized with a three-valued logic and Fuzzy Logic (FL) (in binary, binary-to-integer, and decimal notation forms):

  • no: 00 or 0 or 0
  • maybe not: 01 or 1 or 0.5
  • maybe: 10 or 2 or 0.5
  • yes: 11 or 3 or 1

    and further

  • no: 000 or 0 or 0
  • less maybe not: 001 or 1 or 0.25
  • less maybe: 010 or 2 or 0.25
  • maybe not: 011 or 3 or 0.5
  • maybe: 100 or 4 or 0.5
  • more maybe not: 101 or 5 or 0.75
  • more maybe: 110 or 6 or 0.75
  • yes: 111 or 7 or 1

    And so on. We can always split the intervals in the middle by adding related values, which in the decimal number system always end with a 5. Of course, one can also say something with false and true or by using other symbols.
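    The following small sketch in Python reproduces the tables above and the splitting of the intervals in the middle; the mapping value = ceil(k/2) / 2**(n_bits - 1) is our reading of the listed values, not a standard fuzzy logic construction:

      def fuzzy_scale(n_bits: int):
          # Print each n-bit pattern with its integer form and the graded
          # truth value obtained by splitting every interval in the middle.
          steps = 2 ** (n_bits - 1)
          for k in range(2 ** n_bits):
              value = (k + 1) // 2 / steps
              print(f"{k:0{n_bits}b}  {k}  {value}")

      fuzzy_scale(2)  # no 0, maybe not 0.5, maybe 0.5, yes 1
      fuzzy_scale(3)  # adds the 0.25 and 0.75 grades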
    The rest is possible by the reflective property of the Abstract Machine (AM) respectively Virtual Machine (VM) and the dynamics of the universe, which provides the energy to run the underlying hardware.

    The point with PolyContextural Logic (PCL) or subjective logic is that we can use Fuzzy Logic (FL), specifically in conjunction with a Distributed operating system (Dos), Holonic Agent System (HAS), or Multi-Agent System (MAS), or any other system that works in parallel, concurrently, or simultaneously respectively provides the possibility to switch between a local and a global point of view, which is typical for fibring logic and hence for PCL, to give a trustworthiness to everything in our ontological knowledge base. It is just another attribute, designated as a human would call it (e.g. something with truth, trust, belief, and so on). So one can

  • trust or believe in something, which has a truth, trust, or belief value of 1, or
  • not trust or believe in something, which has a truth, trust, or belief value of 0, or
  • maybe trust or believe in something, which has a truth, trust, or belief value of 0.5, or
  • maybe not trust or believe in something, which has a truth, trust, or belief value of 0.5.

    All the other infinite cases of trust or belief in something can be given any real number as truth, trust, or belief value, but we highly recommend taking the set of rational numbers.
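    A minimal sketch of such a truth, trust, belief attribute with exact rational values, assuming our own illustrative names:

      from fractions import Fraction

      # Each statement of the knowledge base carries a trust attribute taken
      # from the rational numbers, as recommended above.
      beliefs = {
          "sky_is_blue": Fraction(1),       # trust
          "unicorns_exist": Fraction(0),    # no trust
          "rain_tomorrow": Fraction(1, 2),  # maybe/maybe not
          "train_on_time": Fraction(3, 4),  # more maybe
      }

      def trusted(kb, threshold=Fraction(1, 2)):
          # Return the statements whose trust value exceeds the threshold.
          return [s for s, v in kb.items() if v > threshold]

      print(trusted(beliefs))  # ['sky_is_blue', 'train_on_time']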

    Another possibility is to generate an agent for each contexture, for example of a reflective distributed Multi-Agent System (MAS) or Distributed Holonic Agent System (DHAS), as done with our reflectional interacting respectively reflective interactive Evoos, and as also explained in the document titled "Derrida's Machines Part II [] [...]" and elsewhere in relation to reflectivity and interaction, and parallelism, as well as concurrency.
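    A minimal sketch of this agent-per-contexture idea, with hypothetical class and method names: each agent keeps its own two-valued view, and a disagreement across contextures is recorded by reflection instead of being forced into one truth value:

      from dataclasses import dataclass, field

      @dataclass
      class ContextureAgent:
          name: str
          beliefs: dict = field(default_factory=dict)  # proposition -> 0 or 1
          inbox: list = field(default_factory=list)

          def tell(self, other, proposition):
              # Interaction: send the local valuation to another contexture.
              other.inbox.append((self.name, proposition, self.beliefs.get(proposition)))

          def reflect(self):
              # Reflection: list the received valuations that contradict the
              # local ones, without collapsing them into one truth value.
              return [(frm, p, v) for (frm, p, v) in self.inbox
                      if p in self.beliefs and self.beliefs[p] != v]

      a = ContextureAgent("C1", {"it_rains": 1})
      b = ContextureAgent("C2", {"it_rains": 0})
      a.tell(b, "it_rains")
      print(b.reflect())  # [('C1', 'it_rains', 1)]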

    Furthermore, if a context is based on only half-truths and even false assumptions, then the whole inference or conclusion is worth more or less nothing. Even more fatal in general, and for the Semantic (World Wide) Web (SWWW) in particular, is the further use of such a wrong conclusion. That already is Nasty Artificial Intelligence (NAI) and can only lead to human extinction. Linked Open Data (LOD) is one of the best examples of how NAI entered our civilization, because our works are not included, but substituted with wrong and even manipulated contents.

    The point with the Fibring Logic is that it simply reduces to some kind of Logic Programming (LP), Natural Language Processing (NLP) and Natural Language Understanding (NLU), and also Common Sense Computing (CSC), because the mixtures of logics are written in a formula, which at least must make (common) sense to a formal system and a human. Therefore, we simply have sentences, which are expressed or written in a mathematical language based on an ordinary grammar only to be shorter than completely formulated sentences (see the quote of the email below).

    This again leads to the point, where we often begin to talk about ... (see above). :) But at this point we are also running in a circle like the others, and the question is once again how small one of the smallest circles is from the point of view of AIT and how practical a circle is, so to say.
    Of course, we do know, for example from the field of

  • Theoretical Computer Science (TCS) respectively informatics, the proof of the undecidability of the halting problem according to Alan Turing, and
  • naive set theory, Russell's paradox respectively the barber paradox, and Curry's paradox or Löb's paradox,
  • set theory, Cantor's second paradox, but also
  • modern logics, the Gödelsche Unvollständigkeitssatz==Gödel's incompleteness theorems, and
  • quantum mechanics, the Heisenbergsche Unschärferelation==Heisenberg's uncertainty principle,

    that there is no one single solution, if one exists at all. But at the end of the day, and also in the rest of the time, when we do not sleep or are otherwise unresponsive, we have to make a choice. And the best choice is neutral, rational, true, safe, secure, flexible, and inexpensive, always fits, makes happy, and does not destroy anything, so that we are able to go back in case of error.
    So here it is with our OS.

    All of these points are related to our

  • Evoos,
  • Cybernetics of the second generation, identity,
  • (cybernetic) self-portrait, self-augmentation, and self-extension,
  • Digital Physics and Rechnender Raum==Computing Space,
  • hypercomputing, and
  • Caliber/Calibre.

    One can see once again how everything fits together seamlessly and that our Evoos truly was an inspiration and includes the foundation for many other endeavours that followed, including our OS.
    Indeed, we observed how our works were published by other entities through all the years, and then we noticed that our Evoos is the origin, and that if we show that Evoos was the start of our OS, then the other entities could not argue to their advantage anymore.
    In addition, one will also note what C.S. did: there is so much talk, but so few solutions, so that eventually we said take Descartes, Gödel, etc., etc., etc. away and just keep and use what must exist, that is, a dynamic or adaptive network or graph, and keep it minimalistic. And obviously, the result has been working for the worldwide public for more than 20 years now.

    See also the Clarification of the 18th of July 2021 and ... about reactive, (sequencing,) and deliberative, and hybrid agent architectures.

    The proemial relationship undermines the trinitarian axiomatics and in this way the structural definition of the truth. "Was Wahrheit selber ist, wird strukturell durch die klassisch-trinitarische Axiomatik von Identität, verbotenem Widerspruch und ausgeschlossenem Dritten definiert. Diese drei Axiome bestimmen den Begriff der Wahrheit als formal geschlossenes (aber inhaltlich unendliches) System."==What truth itself is, is structurally defined by the classical-trinitarian axiomatics of identity, forbidden contradiction, and the excluded third. These three axioms determine the concept of truth as a formally closed (but contentually infinite) system. [Gotthard Günther: Dieser Substanzverlust des Menschen. Undated, but after 1950]
    So where is the circular tautology, the statement that is true in every possible interpretation, if there is no truth at all?
    And what of that which G. Günther said is true?

    And then we find the company Volkswagen once again.

    We quote an email, which is about the fields of Many-Valued Logics (MVL), simultaneity, morphogrammatics, kenogrammatics, PolyContextural Logics (PCL), and transjunction, and was sent on the 14th of October 1995: "Re: Simultaneity
    [...]
    To continue my reply to Don Miculecky's question:
    >> .... a living or cognitive system could be approximated as a network
    >> of event sequences where each event may or may not have *simultaneous
    >> translations* within the operation of neighbouring event sequences.
    >> It is precisely these "simultaneous translations" which cannot be reduced
    >> to a mechanistic system, and they are necessary if a system is to be "alive".
    >
    >This is very suggestive....can you tell us more about this idea?
    I first came across this idea from a rather unusual and difficult paper by the late German (or Austrian?) philosopher Gotthard Guenther: "Cybernetic Ontology and Transjunctional Operations". (I think it originally appeared in 1960 or so). His idea was that 2-valued logic was not sufficient to describe a system interacting with its environment. For an outside observer, an event may constitute "information". From the point of view of a living system, this "information" takes the form of a "disturbance" which does NOT belong to its (2-valued) "universe" or current context. His argument was basically: we need two values to define/specify the whole system. For its environment (that is seen from THE SYSTEMS's point of view, not ours) we need a third!
    The third logical value should refer to a simultaneously existing context which is not the current one. As an operation to produce such a value, Guenther introduced a hypothetical logical operator called the "transjunction". This is any operator which, when "given" two different values returns a third value. Guenther derived this using a rather unusual method, which I will try to summarize at the end of this posting.
    First of all I wish to say how the idea (which I called "simultaneous translation" for lack of a better term) was shown in his paper.
    He refers to the famous paper by von Foerster on "Self-organising Systems and their Environments" (from a conference in 1960) where an experiment with magnets in a hidden box is carried out. The box is shaken and when it is opened there suddenly appears this beautiful sculpture! In a sense, the hidden "order" of each individual magnet expresses itself in a totally unexpected way when the magnets are allowed to interact freely. (This is the "noise" which was added by the shaking).
    I think it was von Foerster who called the phenomenon "order from noise" and Guenther reinterpreted it as "order from (order + disorder)" to make more explicit the two simultaneously existing "systems" (one external and one internal). The internal "order" (the magnetic poles of each magnet) which is hidden to the outside observer, expresses itself simultaneously as a "work of art" for the external observer.

    Transjunction as a hypothetical operator:
    Effectively Guenther described this operator as a simultaneous logical connection between two systems. His argument was that a formalisation of this principle may be possible and if this were so it may become "computable". He came to this conclusion by a rather unusual manipulation of truth tables (I didn't take it seriously at first). I will try to summarize it here.
    Instead of using classical "truth" values such as 0 and 1 (or F and T), Guenther introduced the concept of "kenogram". That is an "empty slot" which indicates only that it should contain a value the same or different from the neighbouring slots - without referring to the value itself.
    A truth table can then be converted into columns of abstract patterns. We start by taking into account the value patterns (for example, TFFF is the pattern for AND). With two values there are 16 value-patterns.
    If we ignore the values themselves and consider only the "slots", this reduces to 8 (because then TFFF is equivalent to FTTT, i.e. *+++). Such abstract patterns (which merely reflect distinctions - not positive identifications) are called "morphograms". (Actually I think in some of his papers he used digits but that does not matter).
    Guenther then came to a rather astonishing conclusion: 2-valued logic is morphogrammatically incomplete. This is because the 8 patterns do not exhaust all the possible combinations. One can add two new symbols which indicate "foreign" values, leading to 7 new patterns. These patterns define a class of hypothetical operators called "transjunctions".
    (For example, *^^+ corresponds to a "complete" transjunction because it always "rejects" both values when they are different, i.e. ** gives *, *+ gives ^, +* gives ^ and ++ gives +). "Partial" transjunctions only sometimes reject both values. The last combination is *^v+, i.e. two "foreign values" are introduced.
    Incidentally this has nothing to do with fuzzy logic which deals with values BETWEEN 0 and 1 WITHIN the one system. Other systems (simultaneously valid contexts) are not taken into account at all in fuzzy logic.
    The above only deals with 2 variables and binary operators. In later papers Guenther generalized this scheme for n-ary operators and n variables. In this way more general patterns of distinction are derived known as "kenogrammatics" (where the original morphograms are a special case). The "multi-system" logic itself came later to be known as the "polycontextural" logic with multiple negations as well as transjunctions.
    There is no successful formalisation as yet.
    I know that this all sounds totally crazy on a first reading. For that reason Guenther's work tends to be ignored or not taken seriously. I had to read the paper several times before I came to the conclusion that there was some deep sense in it. It is interesting that Rosen seems to have similar ideas developed from a different perspective.
    Many problems remain with the hypothetical "transjunction". One of the most important is the linguistic problem. How does one specify a system with transjunctions? All the existing logical operators (and formal systems for that matter) are based on natural language. When we are making statements we are always talking within a system (or we change systems sequentially). How do we talk *about* the simultaneous interwovenness of systems? To solve this problem Guenther is said to have proposed the idea of a "negative language", but I'm not sure exactly what he meant by it."

    Comment
    The email was sent to the Principia Cybernetica mailing list of the Institut für Kybernetik und Systemtheorie==Institute for Cybernetics and System Theory (ICS) of the department of informatics of the Technical University of Dresden, F.R.Germany.

    The reason for this negative language is explained in the document titled "Derrida's Machines", which is quoted below/above, and also discussed in relation to the Arrow System and the arrows, the Robinson diagram, and the sequence c of distinct new constant symbols, also called negative diagram by the author of the Arrow System, and our Evoos and OS, and the fractal and the self-similarity.
    Simply said, it is the complement of every formal thing, which grounds a symbol, gives meaning to a syntax respectively creates a semantics, or makes a distinction respectively an identity, and hence does not allow proemiality, polycontexturality, subjectivity, and so on. We could also call it formal language vs. dark language in the sense of matter vs. dark matter.

    We also came to the same view and conclusion in relation to the works of G. Günther, the Arrow System, and the Robinson diagram, the linguistic problem, the Symbol Grounding Problem (SGP), Natural Language Processing (NLP) and Natural Language Understanding (NLU), and so on.
    But we have a different point of view in relation to Fuzzy Logic (FL) in particular, maybe because FL was not utilized in such a creative way as we already did at that time, specifically in relation to the fields of Distributed System (DS), Holonic Agent System (HAS), and Multi-Agent System (MAS), and when we went further with the matter in general, as shown in this Clarification.
    In relation to the first point see also the quote of and comment on the document titled "Derrida's Machines Part I [] The new scene of AI: Cognitive Systems?", where the student/researcher came to our Evoos in relation to her doctoral thesis or dissertation, and the so-called Grand Challenge based on The Proposal of C.S.. This proves once again that our Evoos is an original and unique masterpiece of the fields of Cybernetics and Bionics (AI, ML, ANN, CI, SI, CV, CAS, EC, SW, etc.) that was the source of inspiration and used as a blueprint, and not any other document published in 1999.

    The big mistake of G. Günther and Co. is not to understand that

  • any sign needs a symbol grounding, even the * ^ + v, and so on, and
  • one cannot explain morphogrammatics and kenogrammatics without using signs or symbols respectively semiotics.

    It is exactly like with the electron: one must touch it to decide both when and where it is. The same holds for the universe when deciding time and location.
    At this point we always begin to talk about the length of a description and its complexity in relation to AIT.
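    As a small illustration of the morphogrammatic abstraction described in the email quoted above, the following Python sketch (our own naming) maps each value pattern of a two-valued truth table to the pattern of its distinctions only, reproducing the reduction of the 16 value patterns to 8 morphograms and the 7 additional transjunctional patterns that only appear when "foreign" values are admitted:

      from itertools import product

      def morphogram(pattern):
          # Replace each value by the index of its first occurrence, so that
          # 'TFFF' and 'FTTT' both become (0, 1, 1, 1); only the kenogrammatic
          # pattern of sameness and difference is kept, not the values.
          seen = {}
          return tuple(seen.setdefault(v, len(seen)) for v in pattern)

      two_valued = {morphogram(p) for p in product("TF", repeat=4)}
      print(len(two_valued))                 # 8 morphograms from 16 patterns

      all_patterns = {morphogram(p) for p in product(range(4), repeat=4)}
      print(len(all_patterns))               # 15 morphograms in total
      print(len(all_patterns - two_valued))  # 7 transjunctional patterns

    The counts 16, 8, and 15 = 8 + 7 match the numbers given in the email.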

    [to be continued]

    We quote a document, which is about the field of PolyContextural Logics (PCL) and was published in the year 1996: "Introducing and Modeling Polycontextural Logics
    Abstract
    Gotthard Günther introduced the proemial relationship (PRS) as one of the basic transclassical concepts of polycontexturality. PRS pre-faces and constitutes, as the mechanism of the difference making 'difference', all relational and operational orders. The present paper develops a first step modelisation of the proemial relationship in analogy to graph-reduction based implementations of functional languages. A proemial combinator, PR, is designed and implemented, which is proposed as an extension of functional programming languages and as an implementation technique for process communication and computational reflection.

    Introducing and Modeling Polycontextural Logics
    The idea of an extension of classical logic to cover simultaneously active ontological locations was introduced by Gotthard Günther [...]. The ideas of Polycontextural Logic originate from Günther's study of Hegel, Schelling, and the foundation of cybernetics in cooperation with Warren St. McCulloch [...]. His aim was to develop a philosophical theory and mathematics of dialectics and self-referential systems, a cybernetic theory of subjectivity as an interplay of cognition and volition.
    Polycontextural Logic is a many-system logic, a dissemination of logics, in which the classical logic systems (called contextures) are enabled to interplay with each other, resulting in a complexity which is structurally different from the sum of its components [...]. Although introduced historically as an interpretation of many-valued logics, polycontextural logic does not fall into the category of fuzzy or continuous logics or other deviant logics. Polycontextural logic offers new formal concepts such as multinegational and transjunctional operators.
    The world has infinitely many logical places, and it is representable by a two-valued system of logic in each of the places, when viewed in isolation. However, a coexistence, a heterarchy of such places can only be described by the proemial relationship in a polycontextural logical system. We shall call this relation according to Günther the proemial relationship, for it prefaces the difference between relator and relatum of any relationship as such. Thus the proemial relationship provides a deeper foundation of logic and mathematics as an abstract potential from which the classic relations and operations emerge.
    The proemial relationship rules the mechanism of distribution and mediation of formal systems (logics and arithmetics), as developed by the theory of polycontexturality. This relationship was characterised as the simultaneous interdependence of order and exchange relations between objects of different logical levels.
    According to Günther ([Gun80b [Cognition and Volition. A Contribution to a Cybernetic Theory of Subjectivity. In: [Gun80a]]], [...]): The proemial relationship belongs to the level of the kenogrammatic structure because it is a mere potential which will become an actual relation only as either symmetrical exchange relation or non-symmetrical ordered relation. It has one thing in common with the classic symmetrical exchange relation, namely, what is a relator may become a relatum and what was a relatum may become a relator. [...]
    The proemial relationship implies the simultaneous distribution of the same object over several logical levels, which is not covered by classical theories of types. In the following, a concept of such a coexistence and parallelism will be developed which models the kenogrammatic proemial relationship ([Gun80a [Beiträge zur Grundlegung einer operationsfähigen Dialektik. 1976-1980]], [Gun80b [Cognition and Volition. A Contribution to a Cybernetic Theory of Subjectivity. In: [Gun80a]]]).
    Due to the special properties of the proemial relationship and the limitations of classical calculi, an algebraic representation of the proemial relationship must be self-referential, i.e. in classical formalisms it has a paradoxical and antinomic structure. Because of these fundamental difficulties with its formalisation, an attempt will be made here to develop an operational model of the proemial relationship.
    [...]
    Thus the concept of proemiality is not a concept of the logic of relations (Peirce, Schröder) but prefaces - like the differance (Derrida) - all concepts of relations as such [Kae95 [Proömik und Disseminatorik. I. Abbreviaturen transklassischen Denkens, II. Operationale Modellierung der Proemialrelation. In: Jahrbuch für Selbstorganisation Bd.5: Realitäten und Rationalitäten. 1995]]
    [...]

    Implementation of the Proemial Combinator PR
    Based on the fundamental idea of the sameness of semiotic processes within kenogrammatics, the operational semantics of the proemial combinator PR(Ri+1, Ri, xi, xi-1) can now be determined by means of the operational semantics of a virtual combinator machine ([...])
    This model makes use of the homogeneity of programs and data of the graph representation for the combinator machine. In this way, a certain node z, which is realised as a physical object at a particular store address, can serve both as an operator and as an operand within different application nodes.
    Due to the parallel architecture of the combinatormachine, this exchange of roles (Operator <=> Operand) within the same node z can be executed simultaneously.
    Ri+1, Ri, xi, xi-1 can be arbitrary nodes of the combinator graph. The order relation of the proemial relationship, →, represents here the application app(rator,rand), which always guarantees a unique distinction between operator and operand: [...]
    [...]

    Application Possibilities
    Meta-level Architectures and Reflective Programming
    Under the key words Computational Reflection (CR) and Meta-level Architectures in fundamental computing research, attempts are made to extend the classical concept of computation, for example as it is formulated in the λ-Calculus. In particular, the problem concerns the development of computation systems which 'reflect' over their computations.
    According to Maes ([Mae88 [Meta-Level Architectures and Reflection. 1988]], [...]; [Smi82 [Reflection and Semantics in a Procedural Language. 1982]]) a reflective programming language has the property that it explicitly makes methods available for reflective computation.
    In concrete this means that:
    1. [...] (metacomputation).
    2. [...] (object computation). [...]
    In such a system, representations of computation instructions can either be evaluated as a program on the object computation level, or alternatively (for example, in an error situation) they can serve as the data of a meta-computation level which could, for example, correct the error.
    [...]
    The distinction between the program (operator) and data (operand) within the one computation level corresponds to the order relation of the proemial relationship.
    It follows that the structure schema of a reflective computation system corresponds exactly to that of the proemial relationship, which is not the same as (Eigen)-behavior [Foe76 [Objects: Tokens for (Eigen)-behaviors. 1976]].
    [...]
    The proemial combinator PR is therefore suitable for the modelling of reflective systems in the sense of Maes's definition.
    In existing reflective systems (e.g. 3LISP [Smi82]) the meta and object levels are not realised as simultaneous processes, but instead execute purely sequentially.
    [...]
    [...] at any particular point in time, the whole computation will either be evaluated on the meta-level or on the object level. It is always uniquely determined whether an instruction serves as program (operator) or as data (operand).
    In contrast, the proemial combinator PR, along with the programs based on it, enables the simultaneous coupling of object- and meta-computation.
    In this way PR offers a parallel modelling concept for reflective systems.

    Generalisation of the Concept of Parallelism
    The definition of the proemial combinator PR is based on the physical coupling of parallel computations. This modelling approach will now be extended to a kenogrammatic notation for parallel processes.
    [...]
    Kenogrammatics describes a pre-semiotic domain in which the law of graphemic identity does not govern. Kenogrammatics relates to polycontextural systems as formal semiotics to classical calculi and embraces semiotics itself.
    [...]
    Physical coupling and interaction between processes can then generally be formally represented by kenogrammatic operations.
    [...]

    Prospects
    In the model developed here, the transclassical aspects of the proemial relationship occur only (as shown) from the perspective of a particular interpretation of the proemial combinator PR as an emergent surface phenomenon. It does not belong to the architecture of the combinator machine as an inherent feature.
    It may be said therefore, that the approach given here is not a transclassical model, but instead only a particular application and interpretation of a classical formalism.
    This restriction must necessarily apply, since the model is formulated within the linguistic framework of classical formal systems and programming languages (ML, HASKELL) i.e. positive languages. [...]
    A possible next step would be to develop a new complete programming language for the computation model or to integrate it within existing systems. These programming languages could then be used for the implementation of coupled parallelism (in particular polycontextural logics and arithmetics, self-referential, heterarchical and autopoietic systems) [Kae88 [Again Computers and the Brain. 1988]].
    By means of such functional languages which would only require a few extended constructs, it would be possible to develop formal models of process communication and interaction of structurally complex systems (e.g. operating systems and artificial living systems).
    The fundamental barrier to the representation of the proemial relationship lies in the concepts of object, symbol and identity in classical semiotics and all positive linguistic symbolic systems based on it (formal, algorithmic and autological systems).
    [...]"

    Comment
    The fields of Cybernetics and Artificial Intelligence (AI) were totally exotic and even somewhat esoteric fields and career killers at that time (see the so-called AI winter). Now, both fields work well after we took the helm, as seen before with mobile computing, electric vehicles, and much more.

    We only learned about PCL some weeks after the first publication of The Proposal, because we were looking for something which is pre-semiotic, and found kenogrammatics. If we had known the quoted document at that time, then we would have referenced it, for sure.
    We found PCL in 1999, when looking for something like kenogrammatics and morphogrammatics, and saved the quoted document titled "Introducing and modeling in PCL" in January 2000. But we already had Dos (e.g. TUNES OS) with meta-level structure (e.g. Aperion (Apertos (Muse))), MAS, and MABR; but communication and messaging are not sufficient and therefore interaction is required.

    Interactive Turing Machine (ITM) respectively Turing Machine with interactivity, Interaction Machine (IM) or Turing Machine with Input and Output (TMIO), reflective distributed Multi-Agent System (MAS), and Holonic Agent System (HAS) with its basic properties for or capabilities of

  • communication and messaging,
  • negotiation,
  • interaction (e.g. Multi-Agent Belief Revision (MABR), High Performance Computing (HPC), and much more), as well as
  • reflection

    already do the job of subjectivity respectively polycontexturality and much more.
    We even have HAS with ontology as identifying structures, cybernetic logics (including PCL), and holologic. We have subgraphs or hypersets, semantic subnetworks or subgraphs, or subontologies as identifying structures, and agent-orientation in addition to object orientation with the agents in their individual situations.
    The first simple implication is that polycontexturality is simply inherent to the Evoos Architecture, or said in other words, it is a polylogical system and even an ontological system as a side effect of its design.
    The second simple implication is that the so-called Dynamic Semantic Web (DSW) is simply inherent to the Evoos Architecture (EosA).
    And these properties are what the plagiarists and the more criminal entities understood only in 2004, when we understood at the same time how original and powerful our Evoos already was.
    Another implication is that the utilization of semantic structures of knowledge representation formalisms, like for example Conceptual Graph (CG), Semantic Net (SN), etc., leads to graph-based computing (see for example Cogitant based on the Conceptual Graph Model (CGM)).

    We come from comics and painting, and also Visual Programming (VP), and two of many possibilities for VP are pictogram (set)-based programming and rule-oriented programming with graphs or graph grammars (see for example the book "Visuelle Programmierung==Visual Programming" published in 1998, specifically the chapters Regelorientierte VP-Systeme==Rule-oriented VP Systems (e.g. Progres) and Multiparadigmenorientierte VP-Systeme==Multiparadigms-oriented VP Systems).

    So we have a Conceptual Graph (CG), including Simple CG (SG), Semantic Network (SN), Topic Map (TM), Resource Description Framework (RDF) Graph (RDFG), Web Ontology Language (OWL) Graph (OWLG), Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), etc. as semantic structure or knowledge representation formalism, and we simply need to integrate respectively fuse this semantic knowledge representation with (semantic) graph-based programming (e.g. rule-oriented in general and graph grammar-based in particular (e.g. PROgramming with Graph REwriting Systems (PROGRES))) to close the loop respectively world or universe conceptually, as sketched below.
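    A minimal sketch of such rule-oriented, semantic graph-based programming over a triple-style graph; the rule and the names are our own illustration and not tied to PROGRES, Cogitant, or any RDF library:

      # The graph is a set of (subject, predicate, object) triples.
      graph = {
          ("Cat", "subClassOf", "Animal"),
          ("Animal", "subClassOf", "LivingThing"),
      }

      def transitive_rule(g):
          # One rewrite step: where (a, subClassOf, b) and (b, subClassOf, c)
          # match, add the derived edge (a, subClassOf, c).
          derived = {(a, p, d)
                     for (a, p, b) in g if p == "subClassOf"
                     for (c, q, d) in g if q == "subClassOf" and b == c}
          return g | derived

      closure = graph
      while True:
          rewritten = transitive_rule(closure)
          if rewritten == closure:
              break
          closure = rewritten

      print(("Cat", "subClassOf", "LivingThing") in closure)  # True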

    The proemial relationship can be drawn as 2 nodes, interpreted as 0 and 1, connected by 2 directed arrows respectively a pair of directed arrows, with the one arrow going from 0 to 1 and the other arrow going from 1 to 0. The resulting graph shows at the same instant of time the simultaneous relationship of two slots, two signs, and relator 1 and relatum 0, and relator 0 and relatum 1.
    By graph grammars and graph rewriting, which can be implemented on the basis of an Abstract Machine (AM) or Virtual Machine (VM), we can do with this elementary graph whatever we want; specifically, we can copy it, connect it with other nodes by other pairs of directed arrows, interpret a pair of directed arrows as a node respectively relator or relatum, and so on.
    We can also begin a heterarchy by using multiple copies of this elementary graph.
    Also note that a graph can be serialized in a textual form (see also the "Nonreflective Description of the Reflective Tower").
    Everything required is available and formalized by ordinary two-valued logics or many-valued logics.
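    As a minimal sketch (our own Python illustration, not a graph grammar engine), the elementary proemial graph and one rewriting step towards a heterarchy can be written down as follows:

      def proemial_graph(base=0):
          # Two nodes and a pair of directed arrows: each node is relator
          # (source) on one arrow and relatum (target) on the other one at
          # the same instant of time.
          a, b = base, base + 1
          return {"nodes": {a, b}, "arrows": {(a, b), (b, a)}}

      def roles(g, node):
          return {"relator_on": {e for e in g["arrows"] if e[0] == node},
                  "relatum_on": {e for e in g["arrows"] if e[1] == node}}

      g1 = proemial_graph(0)
      print(roles(g1, 0))  # node 0 is relator and relatum simultaneously

      # A rewrite step beginning a heterarchy: copy the elementary graph and
      # connect the two copies by a further pair of directed arrows.
      g2 = proemial_graph(2)
      heterarchy = {"nodes": g1["nodes"] | g2["nodes"],
                    "arrows": g1["arrows"] | g2["arrows"] | {(1, 2), (2, 1)}}
      print(sorted(heterarchy["arrows"]))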

    We also concluded that a three-valued logic is sufficient, as we also explained in relation to fuzzy logic, classified as non-classical logic and included in Soft Computing (SC) and Computational Intelligence (CI) (see the Clarifications of the 14th of May 2016 and 8th of July 2016, the note OS too large to steal of the 15th of May 2016, the Investigations::AI and Knowledge management of the 16th of May 2017, the Clarification of the 23rd of August 2017, and also the section Pure Rationality on the webpage Terms of the 21st Century of the website of OntoLinux), with or as part of a graph-based system (see for example the book "Visuelle Programmierung==Visual Programming" published in 1998, specifically the chapters Regelorientierte VP-Systeme==Rule-oriented VP Systems (e.g. Progres) and Multiparadigmenorientierte VP-Systeme==Multiparadigms-oriented VP Systems).

    The Arrow System of the TUNES OS, which both are referenced on the webpage Links to Software, describes an ontological frame, an ontological relativism, and a graph-based approach, which also has such or similar properties, including reflection and interaction, as discussed in relation to PCL and our Evoos.
    But ...

    As in the case of Agent-Based System (ABS), we also noted a lack of knowledge about the field of operating system (os) in the field of cybernetics.

    Howsoever, the statement in relation to the fields of operating system (os) and Artificial Life (AL) is about the possibility to "develop formal models of process communication and interaction".
    But there is no mention of

  • Multi-Agent System (MAS) with communication and messaging based on Agent Communication Language (ACL),
  • ontology,
  • Holonic Agent System (HAS),
  • Resource-Oriented technologies (ROx),
  • Mixed Reality (MR),
  • etc.

    Indeed, the Arrow System was also published in 1999, but despite the same philosophical, logical, and cybernetical foundations, its description does not mention kenogrammatics and PCL, though it discusses the issue that "agents may find models within such a system mutually contradictory" as part of a certain relativity in an ontological frame. We also call this issue subjectivity (in the universe of discussion).
    In contrast, our Evoos is also reflective, distributed, and interactive, and also has an architecture and is based on cybernetic logics, which implies that it has everything of PCL somehow (see also the quote of the work "Derrida's Machines [...]" below).

    Even better, this obviously already fits perfectly with the stack of the static Semantic (World Wide) Web (SWWW) and our dynamization of such a worldwide network system.

    Furthermore, and as we already mentioned, one of the authors tried to steal our Evoos in 2004, together with the companies SAP and Ontoprise (a fake company only established to block C.S. and our corporation; since 2012 done and sold), as the Dynamic Semantic Web (DSW). [to be continued]

    We quote a document, which is related to a reflective cybernetic system, seems to have been created in its draft version 8 on the 24th of April 1999, and was published by November 1999 respectively before December 1999: "The Arrow System
    Abstract
    This proposal introduces a unified system for computations based on a cybernetic theory introduced here as model-level reflection. [...] The system transcends the limitations of state-of-the-art reflection systems due to the current restricted notion of meta-systems. The system can represent any intuitive concept, and can manage the interactions among arbitrary domains of knowledge. Because of these properties, the user may delegate arbitrary aspects of its own operation and management to itself, with reliable and reusable results. The nature of the system is ideal for computations in a unified field of knowledge, and as such provides an ideal basis for the migration of knowledge to new contexts and uses.

    Introduction
    The Arrow system is a way for computing devices to model and manipulate collections of information gained from the world, up to and including the higher-order reasoning structures of which humans are capable. The goal for the system is to promote complete freedom in the sharing of useful information among people and machines. The technical proposal to achieve this goal is a homo-iconic fine-structure intensional information-processing system intended to support model-level reflection in a clean and useful manner. The means to this goal is a central logical construct with as little semantics implicitly specified as possible, in order to allow the system the greatest possible freedom in reflection on the state of information obtained. This single primitive allows a great freedom in modeling capabilities for the system's knowledge base. Because the system is homo-iconic, this central construct's semantic capabilities extend to yield more freedom in reflection upon the system's state and dynamics. This core of a model-level reflective system has ideal properties for a unified computation system with vast reflective abilities. To place this proposal within the space of software types, one should accurately identify it as offering more than an information database, but less than full artificial intelligence.
    The Arrow paradigm is actually a simple analytical process involving the identification of binary ordered relationships at the level of individual logical atoms from any domain conceivable. [...] From a mathematical perspective, the arrow world in terms of elementary model theory is a system for managing the Robinson diagrams and positive diagrams of all models that agents deem useful for a knowledge system. The subject of the Arrow system is an extension of this concept of a Robinson diagram abstracted over the symbols that form the logical basis for these diagrams. The extension involves the integration of the logical theory provided by a model and the rules and symbols of the model of the system of logic that supports that theory. This allows a semantics of declaration and inference that vary from category theory to state-machine algebras.
    [...]

    MetaText
    [...] Though throughout the progression of the arguments the reader could consider the words "man" and "machine" as interchangeable, this viewpoint does not mean to infer that such concepts are identical from the perspective of any domain other than cybernetics. In addition, the word "system" denotes a cybernetic entity, consisting of a mechanism for managing information. Such entities usually involve an incomplete union between man and machine. A utilitarian goal in this light is to maximize freedom for people involved in such systems by removing the unnecessary information overload inherent in present-day systems. Another goal is to promote information freedom at all levels. In this way, not only does the argument seek to enable information access for humans; it also seeks to expand the amount and type of relevant information available to the reasoning powers of computing systems. Such availability increases the capacity of the system to be useful.
    This paper also includes some concepts only recently elaborated upon by computer science researchers from around the world. Since the intent here is to introduce new concepts, a significant part of the argument will clarify those ideas to support understanding of the system's design. [...]

    Reflection
    Basic Reflection

    Definition
    [...]

    Related Terms
    The argument considers the kernel of a system given to be non-reflective as first-order or first-class. This is the object of "discussion" for any reflective system or meta-system.
    When the system reifies models of itself for analysis, it performs introspection. [...]
    The meta-system, in traditional systems with logical models, is a cleanly separated system whose domain is the first-order system. Its results are available to the manipulator of the first-order system for evaluating and modifying the first-order system.
    A homo-iconic system consists of structures built from a single type of construct. All atoms within the system have identical implicit semantics. [...]
    A virtual machine for a given system is actually a formally defined interface between that system and another one. Alternately, an algebra for an identifiable interface constitutes an operational description of a virtual machine. Algebras involve operators, combinators, functions, and constants. A more neutral term is the mathematical concept of a state-machine. [...]
    [...] reification [...]
    The loop of reflection for a given system consists of a closed path of information flow regarding that system. This flow extends outward from the system into some collection of meta-systems. A human-concerned philosophy expresses interest in that part of the flow that a person receives, digests, and responds to by returning information to the system in question. Cybernetics considers many loops of reflection to apply to a given domain, because a domain may have many interpretations.

    Purpose
    [...]
    Reflection as an action should enable the system to encompass the parts of the manipulator of the first-order system that it can. As a transition for system contexts, it should allow complete freedom in such manipulations. It follows that the most useful type of system would encompass all the transitions that it can rationalize into its domain for reflection. It should also allow the greatest possible capacity for modeling in order to maximize its ability to understand these actions.

    Example Domains [of Reflection]
    Human Thought
    A person, via reflection, can think about a thing that he or she does, given a suitable means of expressing an ontology for the situation in which the given action occurs. This is the applicable meaning of the term within the English language. It is generally accepted, for instance, that thought itself falls under the domain of actions, and that it is reasonable for a person to reflect on such actions as well.
    Computer Programming
    Traditionally, to program a computer is to specify a structured collection of posited formal statements in some statically specified domain.
    This is the traditional model of formal specification for a human-computer relationship. This general concept models almost every human-computer interaction specification. Cybernetics characterizes the vast majority of present computer systems species as having a collection of such domains created for them. The meta-structure for such domains specifies their interaction. This meta-structure is the subject of programming reflection. The reflective part of current programming systems consists in procedures that manipulate the current state of running programs.

    Model-Level Reflection
    Definition
    Model-level reflection is reflection on the state and structure of a system at a higher level of semantics. Model-level reflection considers ontologies and their effects for basic reflection and circumscription upon the domain, which includes its own information, knowledge, and processes. Therefore, this scheme constitutes an algebra for basic reflection and circumscription via ontologies. Within this system, there are several terms by which to discuss the action in question.
    The definition of model-level reflection relates to the concept of the knowledge-level in current artificial intelligence research. The structure of a knowledge-level description [...] is as follows. Agents elaborate on an information state over which they have complete access and control. Models describe the knowledge used or produced by the agent while performing its tasks. Models exist as first-order entities to describe domains. These models form the agent's current state of knowledge. These models would contain the concepts whose interactions would form the structure of a language for describing various situations. The agent may have multiple models per domain and multiple domains per model. Methods are the means that the agent has available for modifying its knowledge state. Tasks describe the agent's goals and their structure.
    The intent of this structure is to provide a useful categorization for the functions of a system and their relations to other systems. Particularly, an agent represents a locus of action for a system to analyze. Models are obviously the gained state of information due to the actions of agents. Methods describe the building blocks and combinators available with which the agent may act. Tasks describe the set of constraints and axioms expected of the system by the various systems with which it interacts, notably the hardware of its implementation, other software systems, and humans and their organizations. This ontology provides the reflective system with a high-level model of itself for manipulation that respects the integrity of information gained.
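
    To make the quoted knowledge-level vocabulary of agents, models, methods, and tasks concrete, we add a minimal and purely illustrative Python sketch, which is not part of the quoted document; all class, field, and method names are our own assumptions.

    # Illustrative sketch of the knowledge-level description quoted above:
    # an agent elaborates models (per domain), acts through methods, and
    # is directed by tasks. All names are assumptions for illustration.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Model:
        domain: str
        concepts: set[str] = field(default_factory=set)

    @dataclass
    class Task:
        goal: str

    @dataclass
    class Agent:
        models: list[Model] = field(default_factory=list)   # current knowledge state
        methods: dict[str, Callable[["Agent"], None]] = field(default_factory=dict)
        tasks: list[Task] = field(default_factory=list)

        def act(self, method_name: str) -> None:
            """Modify the knowledge state with one of the available methods."""
            self.methods[method_name](self)

    # Usage: an agent may have multiple models per domain and vice versa.
    agent = Agent(models=[Model("arithmetic", {"number", "sum"})],
                  tasks=[Task("answer user queries")])
    agent.methods["learn"] = lambda a: a.models[0].concepts.add("product")
    agent.act("learn")
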
    Since the original intended context for this concept is that of knowledge engineering, or its extension, artificial intelligence, some translation is necessary to relate these terms to the concepts of programming. Knowledge-level description should be inherently concerned with neither the representation of the agent's knowledge nor the concrete mechanism used to perform the tasks. However, both subjects should certainly be part of its domain for a system addressing overall utility for its users. Such is the subject of execution-level reflection, which is naturally not in the user's best interest upon which to focus.
    [...] Models as such are arbitrary to construct, since several sets of symbols may be equivalent in their expressive power. [...] The present argument takes an expanded interpretation of the model notion, including not only the symbols subject to axioms, but also the model for the inference system that provides the Robinson or positive diagram itself. [...] the result is a powerful algebra of context shifts in terms of inference systems and functions. [...]
    Within the usual reflective systems, the system holds a model of itself, and the user and system manipulates the system's state according to that model. In contrast, model-level reflective systems may dynamically instantiate arbitrary models for arbitrary structures within the system or without of it, and provide first-order identification of those models with their intended subjects. To elaborate, a model-level reflective system may be seen as a system of abstractions for reasoning about the world, wherein the system has an effective means for applying models to its own structure in arbitrary ways. It should also be able to enforce the constraints desired for itself based on reasoning enabled by those models.
    The model level of a domain, in this light, recognizes the universality of model theory, and therefore applies the model metaphor to all the elements of its domain. For instance, the agent's knowledge proper is subject to modeling (as it also is at the knowledge level) within a system that reflects at this level. However, the tasks, the methods, the models themselves and representations of the agent are also available for analysis and modification according to semantics provided by the system. In order to maximize utility, the system should include within its domain the actual implementation of the system on a given set of hardware devices and the management of interface with system users. It should also provide reliable tools for verification of all implementation and communication decisions made by the system and the user.
    Informally, this discussion takes knowledge to be the closure of an information structure with an 'intelligent' context provided by an underlying system, as symbolically provided by the presence of agents. We attribute knowledge to the agents by observing their actions; an agent "knows" something if it acts as if it had the information and is acting rationally to achieve its goals. The intelligence of the context is such that it can understand the effects of information updates on the knowledge that the system manages. Such structures as inference systems, type systems, and ontologies provide these effects. A model-level system therefore maintains such a database of knowledge derived from received information. A desirable property of this maintenance is for the system to manage consistency reliably within that knowledge database. There are two basic approaches to this question of consistency. The system may maintain a single consistent state wherein the new information and the current task determine the effect on the knowledge structure. Otherwise, the system maintains multiple states of information under an algebra of states wherein the new information always provides an information transition from the old set of states to a new one in a decidable manner. The latter approach results in a more complete system of information regarding the interactions of ontologies. A complete system in this regard is the right step towards model-level reflection.
    [...] Model-level reflection takes knowledge descriptions of a system and its substrate, the virtual machine, and integrates them into a coherent whole. While reflection may modify the state-machine that defines the virtual machine, model-level reflection observes and modifies the reasoning behind the choice of model. The model is not only the subject of model-level reflective analysis; it is also the medium for such operations.
    Model-level reflection in a homo-iconic system must derive from the same primitive as the building block for the first-order system in order to preserve the accessibility of information and knowledge.

    Related Terms
    A context is a function of a model that provides an environment for agents that fully supports that model. A context therefore provides all accessible information in terms of structures specified by that model. [...]
    An ontology is an explicit formal specification of a conceptualization, or a conceptual model for a domain. Specifically, ontologies are concerned with the objects, concepts, and relationships among them within some domain. In formal terms, an ontology is the statement of a logical theory. In terms of contexts, an ontology is associated with the space in which actors model domains in terms of the ontology, and so 'respect' the restrictions posited by the ontology concerning its objects. Informally, this argument will consider the ontology for a context to be a description of the objects that are available for discussion.
    An ontological frame denotes the formal model specified by a structure of ontologies, as well as the universe of discussion that it generates. Frames are structures of contexts, perhaps uncountable in size. The ontologies specified for the frame may crosscut each other in arbitrary ways, to allow the frame user to have the structure due to crosscutting available for study and first-order use. A model-level system develops its knowledge in terms of ontologies and their frames. Often, agents may find models within such a system mutually contradictory, and this is both permissible and desirable. [...]
    Ontologies are not absolute, as is evident from the need for translation schemes among the various human and computer languages. This suggests a concept of ontological relativism, which denotes the idea that the preference for an ontological frame is entirely relative to some base frame or structure of frames. Equivalently, no ontology can be inherently optimal for working with a specific domain. The overriding motto would be that "No knowledge is an island." All agents approach ontology constructions with some frame in mind. A naturally useful paradigm foresees the consequences of this method and adjusts it for the general relativity of ontologies and their frames. Considering the relativity of ontologies, then, the system should not consider a particular ontology or frame as an absolute reference, but instead a relative reference where the results achieved by the modeling by default apply locally unless proven otherwise.
    One may consider an ontology analogous to a set of definitions that apply to a specific domain within a dictionary for a human language. [...]
    This definition of an ontology suggests modeling specific ontologies as directed links between two domains. A source domain provides the information from which the user (or the system) builds the ontology within the description language of a target domain. Since agents operate within frames, or structures of contexts, it follows that there may exist multiple ontologies for a given domain within the current context, and that these ontologies may easily crosscut each other.
    [...] ontological commitments [...]
    [...]
    Ontological frames identify the agent from the knowledge perspective. In this way, a model-reflective system may represent the user as an agent, making logical guesses about the user ontology frame by the interactions provided. If the user exhibits contradictory ontologies, then the system will update the frame accordingly. In this way, an agent may study the interaction of these ontologies. This allows for two or more users to interact with the system via the same device with no secure identification made, while the interactions between their beliefs and desires are preserved and managed. It similarly provides for the use and analysis of a user history in tracking the beliefs and preferences of the user as system knowledge develops. The system provides all of these benefits through casting users, external software agents, and all other incoming information in terms of agents and their ontologies.
    Crosscutting of ontologies [...] An ideal model-reflective system provides the arbitrary crosscutting of ontologies to maximize information accessibility while allowing arbitrary ontologies for use in the desired domain. This essentially provides the system with definitions for the elements of a domain from various perspectives, so that the reasoning structures with access to the domain knowledge can analyze the system from the currently optimal perspective.
    [...] fine-structure system (a.k.a. fine-grained) [...]

    Purpose
    The intent of model-level reflection is essentially to preserve the information structure governing the relations between the first-order system and other domains in a semantically well-defined way. The function of the model level is to provide support for mappings from arbitrary ontologies to the domain for analysis. It follows that model-level reflection should provide general-purpose modeling capabilities for any domain, and that all models produced should be available for use by all interested parts of a system. [...]

    Example Domains [of Model-Level Reflection]
    Philosophy
    A natural generalization of the domain of human thought concerns the motives and ontologies that oneself and others use. This domain, once properly extended in its scope, constitutes a homo-iconic system of concepts. In this system, the person and language employed become transparent. That is, the system reduces them to the role of cybernetic medium while retaining their availability as objects of discussion.
    Information Systems
    The natural generalization of the domain of computer programming concerns the postulation and processing of information in general. [...]
    [...] The kernel of such a system is the system that maintains such an interface. [...]
    Orthogonal referential integrity and persistence of information become necessary properties in the management of the system in that they reduce the amount of redundant information necessarily processed by any element of the kernel. As the information-base grows, the need for such a management will become critical if the system is to be useful.
    Partial views within a large information system have historically been acceptable due to the complexity of system relationships. If the number of relationships between objects increases geometrically with object population, for instance, then the categorization of these relationships to filter them from views of a large system is necessary if these relationships are to be comprehensible. [...]
    Unity of the information system is essential for and identical to information accessibility. If there is a cost inherent in maintaining this unity, then that cost detracts from the overall utility of the system. More specifically, system unity does not infer a disregard for the relativity of ontologies, which would suggest the use of a root ontology from which to base specialized ontologies. Instead, it employs the understanding of this concept to allow for a far greater range of information translation ability, by providing paths of information translation that a hierarchy of ontologies would prohibit. By losing a fixed reference point, the system gains the ability to reflect upon its own information structures and domain models from various perspectives. The result is a far greater potential yield for the system in terms of its inference capabilities.
    Knowledge Systems
    The following statements outline the type of design intended for the Arrow system to support, considered as a naturally useful extension to the previous definition of an information system. The formalization of such systems has not occurred among the research community, so that these specifications are vague.
    Knowledge systems are those information systems whose purpose is to yield real-world modeling capabilities via free interaction with users. No system today preserves these properties in a manageable way. These systems are those intended to outlast their creators and the conventions they use. Present day knowledge systems require a vast array of human participation to propagate. [...]

    In this way, a knowledge system that functions non-monotonically in complete isolation from human attention constitutes artificial intelligence. Regarded in the terms of cybernetics, the question of artificial intelligence shifts from the issue of construction to that of mapping existing knowledge systems onto available hardware.

    Review of Existing Systems
    Stacks
    The basic popular context algebra consists of application of a transition over a hierarchy of contexts. These contexts represent the collection of model, method, and task into one form, resulting in a loss of model-level structure which is difficult to reverse. [...] The pertinent limitation of this system of logic is that downdating of information is not subject to constraints of high-level semantics. Primitive forms of this system include stack-based reasoning systems.

    Procedural
    [...]
    [...] Beta, a derivative language, generalizes procedures, objects, and exceptions as patterns in order to enhance further the structures that crosscut this hierarchy.
    A possible solution to this dilemma involves the method that file-system designers have used for quite some time. The availability of path references constitutes a tremendous extension to the ability of the system to re-use gained information. The path concept is a simple reification of the geometrical structure of the hierarchy, and, as such, constitutes a first-order expression of the system's context algebra. In this simple case, block-structure equals context-structure, so that the familiar operators from Unix may be re-used. In this way, the foreslash operator ("/") acts as a selector coupled with an identifier for the particular relative sub-context. The ellipses "." and ".." select the current context (self) and the parent context (parent), respectively. Concatenation of these operators and elements of the domain specification produces programs of context-selection, manipulation, and inspection via standard first-order procedures. Given the current apparent success of interfaces to storage based on this meta-system scheme, it seems amazing that no current large-scale procedural programming language totally integrates this scheme into its definition as a way of reifying context network topology.
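
    The quoted path scheme is easy to demonstrate. The following minimal Python sketch, which is not part of the quoted document, resolves "/"-separated selectors together with "." and ".." against a nested tree of contexts; the resolver and the sample tree are our own illustrative assumptions.

    # Illustrative sketch of the quoted path-as-context-algebra idea:
    # "/" selects a named sub-context, "." the current context, ".." the
    # parent. The context tree and resolver are illustrative assumptions.

    contexts = {"root": {"etc": {"motd": "hello"}, "home": {"user": {}}}}

    def resolve(path, tree=contexts["root"]):
        stack = [tree]                       # parent chain of contexts
        for part in path.strip("/").split("/"):
            if part in ("", "."):
                continue                     # "." keeps the current context
            elif part == "..":
                if len(stack) > 1:
                    stack.pop()              # ".." selects the parent context
            else:
                stack.append(stack[-1][part])  # "/name" selects a sub-context
        return stack[-1]

    print(resolve("/etc/motd"))             # -> "hello"
    print(resolve("/home/user/../../etc"))  # -> {"motd": "hello"}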

    Functional
    The original homo-iconic language paradigm concerns the function application and the concatenation (currying) operators as fundamental and identical. Context-inclusion relations in the various flavors of Lisp have employed either run-time call or lexical inclusion.
    [...]
    [...] The properties of functional semantics relate to formal proof theory via the Curry-Howard-deBruyn isomorphism, allowing for the simple formal determination of program semantics.
    Here, the functional metaphor extends the path metaphor from the simple hierarchy where block-inclusion equaled context-inclusion to a system wherein many other function application webs crosscut the unique block-structure hierarchy. The selection of a particular set of function applications constitutes the usual idea of a directed graph of data-flow where nodes model function applications and arrows model transitions in context due to those applications. Obviously, our modeling strategy shows that the current context-algebra is far richer than the procedural method, since identifier selection is no longer sufficient to identify a context-shift. Instead, context shifts also require agents to specify an atom of state as the function's argument, so that the selector is binary vice unary.

    Objects
    The object-orientation paradigm consists of a basic translation of the state-machine model into the model that localizes all permissions for context updates to the states of (usually) simpler machines. It consists of the definition of a clear virtual-machine creation vocabulary in terms of the substrate state-machine's operations. It is, therefore, a translation from the substrate machine into an environment of declared state-machines. [...]
    [...] current systems for handling objects in software cannot reify the concept of a binary symmetrical relation into a first-order entity whose model-level understanding agents may encode within the language.
    The object-orientation scheme provides context shifts via instantiation of objects by various methods, as well as by the semantics of the object's operations, whether procedural or functional. The aspect of this paradigm most lacking in semantical integrity concerns meta-instantiation, wherein new objects are instantiated concerning "lower" objects. This notion of meta-systems existing at various "levels" with respect to a base context forbids model-level reflection in meaningful ways. For example, if a modern digital computer is considered the base context, then the current model of meta-instantiation does not cleanly allow structures to be built dynamically from objects for first-order identification with the hardware as a state-machine. It follows that the loops of programming reflection are not available for reflection via modeling within the universe of objects and their interactions. Such a restriction on reflective capabilities limits the system domain from exploring various aspects of its operation and use.

    Declarations
    [...]
    [...] This model only realizes its full potential once the underlying model of the logic system is available for modification. For instance, a new set of logical paradigms introduced in the last two decades provides a structure linking standard predicate logic to the minimal modal logic. Such a structure includes domains like dynamic predicate logic, multi-modal logic, arrow logic, and information-transition logic that include subvariants exhibiting decidability while possessing a quite valuable power to express concepts. Using these logics as a basis, model-level reflection at all levels of the system results in a powerful system for managing information.

    Aspects
    A current advance in the theory of meta-programming is the notion of aspect separation. This approach formally recognizes that the above paradigms fail to singly capture many design issues with their provided ontologies for building knowledge systems. Consequently, if the system is to address these design issues, the system must provide a transformed specification of the ontology, so that elements that address these issues by crosscutting the original ontology become part of the specification orthogonally. The transformed system of information exists within several 'spaces', each of which shares a name-space and uses a unique virtual machine to specify relationships obtaining in the domain. [...] The integration of these specifications into a single unit specification tends to result in a loss of the meta-information that includes the domain ontology and the aspect ontologies. In traditional systems, the aspect methodology only occurs within the human mind, and so the computer has no access to this type of meta-information; the loss is therefore implicit and hence irreversible without considerable human assistance. Therefore, the more useful means to handling aspects is to cleanly separate the ontologies and define the means by which the system combines ontologies in order to produce results addressing all the issues without loss of system information. In current systems, the provision via human-directed development of orthogonally independent semantic models for the various aspect virtual machines addresses this issue of ontological separation.
    In this model of programming, the appropriate paradigm captures the natural ontology for the domain in question within a specification. This specification intentionally leaves unspecified issues that do not directly concern the knowledge level of the intended domain, such as storage management, execution speed, and communications constraints. Such issues should affect the translation of the domain ontology into working code. One or more aspect ontologies that address the process of translation between domains from orthogonally independent perspectives explicitly handle those issues.

    Intensionality
    [...] Intensionality guarantees referential integrity by replacing references to text identifiers with "direct" links (called hyperlinks). This referencing mechanism effectively raises efficiency of maintaining referential integrity within an orthogonally persistent system of relationships between code fragments. [...]
    [...] research in the field of visual programming, and, more specifically, direct manipulation concepts therein provides ample evidence of the conceptual simplifications for a computing system gained due to intensional interfaces. The operational behavior of such systems is often far simpler to understand and manipulate consequently.

    Incremental development
    Incremental development is the application of fine-structure analysis to the logic of the evolution of a knowledge system. The progress of an agent toward defined (perhaps evolving) goals tends to create a lot of information. [...]

    Abstraction
    [...]
    The classic algebra of abstractions, the lambda calculus as introduced by Alonzo Church, is a general system of functional specifications. While the notations of the lambda calculus involve the traditional variable-as-state model for computations, the calculus itself is an abstraction over the syntactic formalism that yields an intensional model. [...]
    [...] The lambda calculus is a formalism that identifies in its domain all finite recursively-definable functional structures. The practical significance of a formalism that can distinguish those algorithms of which a finite-state machine is capable cannot be over-estimated. However, this formalism simultaneously fails to express those algorithms of which modern machines are incapable. [...] Because of these limitations, the lambda calculus alone is insufficient for knowledge systems as described above, in that human-reachable structures will be quite unmanageable. (Notice that the goal is for the machine to assist the human-reachable tasks by managing the information involved, not actually attempting to perform such tasks as specified.) Even among less capable systems, it seems useful for the system to express to the user when it cannot handle some type of action directly. However, the computer system should make itself available for discussing various aspects of actions that it cannot complete, and to manage the information gained as knowledge.
    Note that the lambda calculus employs the intuitionist logical model for context structures. [...]
    As an algebra, it has difficulty expressing important human-level activities, such as instantaneous pattern recognition and conditioned responses. Moreover, it seems natural that a model-reflective computing system should understand the operations of what constitutes user-interface in modern graphical interfaces: the abstraction from a bitmapped display of information presented by the software system. If a system can prove that its graphical interface processor communicates information in a way intuitive to the user's ontology, then it can modify that interface in a way that preserves those properties.
    [...]
    Current systems lack the ability to express ontologies as first-order vocabularies for domains coherently and to allow for ontological crosscutting in a clean and safe manner.

    Primitives
    An overriding characteristic of current-generation software systems of any kind is the implementation of standard mathematical and logical structures as primitive objects. [...] However, the distinct inability of these systems to immerse the context in other mathematical or algebraic systems makes obvious their limitations for dealing with information structures in general. [...]
    To that end, software systems should enable agents to encapsulate arbitrary models of various formal objects. [...] The utility of the abstract form of the models is that such models translate to various hardware and software architectures in ways that the software can manage automatically.

    Finite size
    The intuitionist virtual machine's internal state can be characterized by a finite but unbounded stack and a single register for manipulating that stack with respect to a pool of random-access data known as the heap. [...]
    [...]

    Model-Level Properties
    Tasks for the system are states that agents modeled but cannot or have not constructed. Examples of this category of knowledge for a model-level reflective system include input from the user, the management of an infinite-state device, and all data from "unreasonable" sources. Traditional systems cannot manage the information provided by such domains, and therefore fail to overcome complete reliance on the user for much of the information necessary to construct and maintain the system. Hence, systems that cannot support general-purpose model-level reflection will not succeed in terms of overall utility for society.
    The scheme for implementing the notion of a model-level description of the system is as follows. The agent is the information system as a whole or taken as a consistent part. The agent's models consist of the interaction of collections of information atoms. The agent's methods include the functional semantics that the construct can provide. The agent's tasks are actually models of the desires of systems with which the reflective system interacts, but does not fully understand. The constructs used must be broad enough in scope to achieve this range of interpretability. [...]

    The Proposal
    This paper intends to present a cybernetic system that fulfills all the stated requirements for an information system that provides system-wide model-level reflection, as well as the properties necessary to manage a complete knowledge system. As such, it should provide an excellent substrate for the development of artificial intelligence prototype systems, including expert systems and knowledge bases with a far greater utility than conventional systems. It also trivially supplies the means for a unified system for computing systems with high utility and an appropriate and adaptable conceptualization system. The goal is a unified system capable of modeling any concept, including those that reflect on the system domain. [...]

    Basic Metaphor
    A Binary Relation
    The system is a specification of atomic relationships (called arrows) between objects that are themselves arrows as well. The translation of the notion of a logical binary relation is therefore a set of arrows. This yields an expressive capability that easily rivals the relational algebras of mathematics. The system enables a function type by specifying a left-deterministic relation, allowing arrows to be composed into functions as well.
    This yields a constraint-based or axiomatic programming construct. The total set of constraints by other arrows that applies to a given arrow constitutes its definition. However, the constraint metaphor provides an arbitrary abstraction level, since any object that the system can postulate is available as an arrow for declaration of constraints.
    Although the system provides the metaphor of relational declaration, we can see that the world of arrows that an Arrow system will implement has a far greater capability than those standard systems of specification. For instance, within a complete information system built from arrows, each arrow is available for statements constructed by other collections of arrows. This property enables arrow models to build the definitions of objects constructively and incrementally.
    The system that does these things functions quite differently from ordinary computing systems. Its base semantic paradigm constitutes a reasonable theory of information atomicity in terms of cybernetics, although it is trivially recognizable that these information atoms are far from indivisible in the ordinary sense. As shall be shown, the ability to identify these atoms of information with arbitrarily posited patterns of other atoms results in the ability to crosscut ontologies in first-order ways that are often calculable. In this way, the system easily reifies, studies, and manipulates the mental process by which users ordinarily build information and knowledge systems.
    This lies in direct contrast with the de facto standard for information atomicity, the bit. The bit embodies the von Neumann state-space architecture rather succinctly, and as such is limited from many of the higher-order abstractions of which arrows are capable in a meaningful way. [...]
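
    As a purely illustrative aside and not part of the quoted document, the relational metaphor can be sketched in a few lines of Python: a binary relation is a set of arrows given as ordered pairs, and a left-deterministic relation can be read as a function; the helper names are our own assumptions.

    # Illustrative sketch: a binary relation as a set of arrows (ordered
    # pairs). A left-deterministic relation (at most one target per source)
    # composes arrows into a function, as the quoted passage describes.

    def is_left_deterministic(arrows):
        sources = [src for (src, _dst) in arrows]
        return len(sources) == len(set(sources))

    def as_function(arrows):
        if not is_left_deterministic(arrows):
            raise ValueError("relation is not left-deterministic")
        table = dict(arrows)
        return lambda x: table[x]

    successor = {(0, 1), (1, 2), (2, 3)}     # left-deterministic: a function
    print(as_function(successor)(1))         # -> 2
    print(is_left_deterministic({(0, 1), (0, 2)}))  # -> False: a mere relation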

    A Homo-iconic System
    Any application of arrows to other arrows constitutes model-level reflection. The arrow construct itself forms the basis for reflection via the object-level / context-level distinction. This common construct has so little semantics implicit to its definition that much information that agents would consider unreachable, because it is sub-structural or crosscutting to the model specified by the semantics of ordinary languages, is reified easily in this new system for first-order reuse. This allows the Arrow system to act as an effective meta-system language providing inter-language translation up to the limits of computability via a uniform modeling mechanism.
    Although the arrow construct may be intuitively seen to model transitions of information between languages, it also provides an additional metaphor, since the "nodes" representing the pair of languages (actually, state-machines) in the translation are actually arrows themselves. These node-like arrows again model another information translation, since a language represents an interface between systems.

    Not a Language
    The Arrow system blurs the distinction between context type and other types. Essentially, the system renders all types in terms of arrows. Only a context can yield meaning for a given arrow. Such a context is due to a type of abstract virtual machine built dynamically by the environment from the specifications of ontologies.
    A natural view for the Arrow system involves generalizing the notion of a hyperlink to include all invocations (and references) both above and below the first-order level of a given language. Therefore, all code consists of structures of hyperlinks, individually reified as arrows. The result is an abstract diagram that mirrors data-flow in its interpretation, since it links the first-order objects of an ontology (for, say, a programming language) to their invocation points. The structure relating the invocation points is an abstract version of a syntax graph, which again agents may easily construct from arrows.
    The semantics of an Arrow system specification will have a functional character itself (mirroring the intuitive interpretation of an arrow as an ordered pair specifying a function or a lambda term), to provide a clean way to produce correctness verification. If presented with a sequential representation of an arrow specification, the system should separate the information inherent in the sequential form, unless the agent states that the form contains information used by some part of the system.

    A Simple Construct

    The simple visual concept of an arrow is a directed link between two points. A formal metaphor for specifying arrows should consist of viewing arrows as data structures with exactly two slots that are ordered. [...]
    These slots can only contain references to other arrows, but a slot cannot contain a reference to another slot directly. [...]
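
    Read literally, the quoted construct is a record with exactly two ordered slots that may only reference other arrows. A minimal and purely illustrative Python sketch, not part of the quoted document, with the class name and wiring being our own assumptions:

    # Illustrative sketch of the quoted two-slot construct: an arrow holds
    # exactly two ordered references, and each reference is itself an arrow.

    class Arrow:
        __slots__ = ("head", "tail")
        def __init__(self, head=None, tail=None):
            self.head = head or self     # unset slots point to the arrow itself,
            self.tail = tail or self     # keeping the world arrows-only

    a = Arrow()            # a self-referential base arrow
    b = Arrow()
    link = Arrow(a, b)     # an arrow relating two other arrows
    meta = Arrow(link, a)  # arrows about arrows: reflection stays first-order
    print(isinstance(link.head, Arrow) and isinstance(meta.head, Arrow))  # True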

    Graphs
    The natural ontology for viewing the Arrow system consists of collecting arrows into various groups. Some kinds of these groups are similar to directed multigraphs of arrows and nodes as in mathematical graph theory. Graphs are not intrinsically special types in the Arrow system, instead merely collections of arrows. As such general abstractions, they constitute a natural construct for reflection among arrows. In this way, the user can apply various abstraction schemes to collections of arrows as basic derivatives of set theory. Since arrows model declarations in traditional specification languages, this action can work at the model-level.
    In mathematics, a graph is a way to specify the relationships between selected objects viewed as nodes. In the Arrow system, the nodes are really other arrows within the system. Taking general relational algebra or category theory as the logical system, each graph corresponds to a relation. In this way, the natural theoretical view of a graph is as a set of arrows, per modern set theory in formal logic and mathematics. (To be accurate, the most natural view takes Fraenkel-Zermelo set theory to be the fundamental model for the system from the viewpoint of first-order logic.) [...] Graphs constitute reflective-order relations, since each arrow represents the application of a relation symbol to an information atom, and interactions between these relation invocations constitute a higher abstraction level for the relation. [...]
    [...]
    [...] Since our grouping mechanism is completely unrestricted, the ontologies that are possible in the system may crosscut each other, as well they should! [...]
    [...]
    Taking the relational metaphor as a basis for construction, the grouping mechanism can be modeled itself by a set of arrows. These arrows model atoms of information regarding the set-membership relation [...]. [...] The construct thereby enables arrows to contain references to graphs themselves. This concept generalizes to a scheme that can reify any concept as the node of a graph. To begin, imagine that for every graph there exists at least one meta-graph: a graph that describes the original, in terms of set theory in the present case. This graph maps the relation symbol to its invocations in the original graph; for each arrow in the graph, there corresponds exactly one arrow in the meta-graph, termed a meta-arrow. [...] Note that for each meta-graph, there exists a further meta-graph for that graph; notice that the resulting extensional structure is an infinite recursion process, which is often termed meta-regress in traditional systems. This process evidences the need for an axiom of infinity to conceptually distinguish those constructions that the system should implement lazily. [...]
    This is not to say that arrows must always be explicitly instantiated in order to construct a relation. [...] The existence of an arrow as a member of this axiomatically defined relation could be determined dynamically. [...] This technique is hardly novel. Many existing systems employ this idea of evaluating the structure of objects only on demand, known as a lazy evaluation strategy.
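
    As a purely illustrative aside and not part of the quoted document, both the meta-graph construction and the lazy evaluation strategy just mentioned can be sketched in Python; the encoding of a meta-arrow as a tagged pair is our own assumption.

    # Illustrative sketch of the quoted meta-graph idea: for each arrow in a
    # graph there is exactly one meta-arrow, and the meta-regress is
    # generated lazily rather than instantiated outright.

    import itertools

    def meta_graph(graph):
        # Each meta-arrow pairs a membership marker with an arrow of the graph.
        return {("member-of", arrow) for arrow in graph}

    def meta_regress(graph):
        """Lazily yield graph, meta-graph, meta-meta-graph, ..."""
        while True:
            yield graph
            graph = meta_graph(graph)

    g = {("a", "b"), ("b", "c")}
    for level in itertools.islice(meta_regress(g), 3):
        print(len(level), "arrows")   # the regress never needs to terminate
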
    [...] Furthermore, a graph within the system has very little meaning if it is 'bare'; that is, if its arrows are not annotated by some structure that develops the meaning of those arrows. As an example, consider figures one and two below, which depict the diagram for a state-machine and its syntax-level abstraction. The extension of the graph of the automaton to its syntax graph (reifying basic arrow-level relationships in the first diagram) represents a shift of information content from the text identifiers to arrow structure. The system extends this shifting process to encompass the information that is textual even in the syntax graph, until all the information necessary for the system to encompass semantics of the diagram exists in terms of arrows alone.

    Logic
    The logic of arrows as expressed here has only been analyzed in the last decade as part of the larger field of logical research that intends to explore the various types of logics between full first-order predicate logic and the less expressive, though decidable, minimal modal logic ([...] [Exploring Logical Dynamics. 1996.]). Arrow logic explores the notion of the definability of processes and information transitions in terms of these logical atoms. The logic of arrows is actually a family of logics whose atoms may or may not be associated with ordered pairs or tuples of points. Note that the Arrow systems as presented here fundamentally differs from the current subject in logic in that the underlying nodes in Arrow graphs are also arrows that are subject to inclusion by any graph in the system. This yields both homo-iconism and a clean mechanism for reflection by the system at all levels.
    The fundamental notions of arrow logic involve some simple predicates over featureless logical atoms called arrows. Just as in the Arrow system, these objects fundamentally represent specifications of binary relations. The three canonical predicates specify identity, inverse, and composition relations over those atoms. Arrows themselves may be thought of as corresponding to ordered pairs over an underlying space consisting of states (viewed as nodes) with the arrows representing the allowed transitions in the system. However, the idea of "pointless" arrows is just as valid a model. The fundamental notion of a collection of arrows, known as an arrow frame, corresponds precisely with the introduced notion of a graph in the Arrow system. The reason for the shift in terminology is to effect a different point of view on the utility of arrows and the modeling capabilities they engender, and to clarify the difference between ontological frames and arrow frames. Current research practices consider arrow frames in isolation, whereas the graph is intended to be an atomic module of information for a new class of high utility computing systems. As such, the primary focus in the design of these information systems involves the interactions between the graphs, since they constitute the refinement of information within the system until it approaches the level of knowledge.
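
    The three canonical predicates can be made concrete with a minimal Python sketch, which is not part of the quoted document and models arrows as ordered pairs over points; the encoding is our own assumption.

    # Illustrative sketch of the three canonical predicates of arrow logic
    # quoted above, with arrows modeled as ordered pairs of points.

    def identity(a):
        src, dst = a
        return src == dst                    # a loops on a single point

    def inverse(a, b):
        return a == (b[1], b[0])             # b runs in the opposite direction

    def composition(a, b, c):
        # c composes a and b: a goes x -> y, b goes y -> z, c goes x -> z
        return a[1] == b[0] and c == (a[0], b[1])

    print(identity((1, 1)), inverse((1, 2), (2, 1)), composition((1, 2), (2, 3), (1, 3)))
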
    [...]
    As for arrows in particular, the fundamental relations considered of interest for the Arrow system to extract and describe information from arrow collections are the three canonical relations, the incidence relations, and the reflective relation, specified as follows.

    Canon
    [...]

    Incidence
    [...]

    Categories
    The mathematical notion of a category relies directly on the directed multigraph construct, just as the non-degenerate graph does in the Arrow system. [...]
    Categories assume a fundamental difference between the notion of arrow (as morphism) and node (as object), so that the theory of categories cleanly separates the notion of system and meta-system by restricting categories from crossing that boundary. [...]
    [...] Furthermore, the Arrow system does not assume associativity of the composition relation, except with respect to identity arrows. However, both systems regard inversion as a subject for logical analysis. The main reason for the differences lies in the fundamental metaphor for the node concept between the two systems. Within categories, nodes describe strongly (or trivially) typed logical atoms, that is, the internal form of a node defines its meaning. The Arrow system, on the other hand, considers nodes to simply be the subjects of informational atoms, and to have no inherent meaning. The Arrow system constructs meaning on its own.
    [...]

    Multi-Arrows
    Another concept for the Arrow system is the basic notion of chaining together arrows, providing arrows with arbitrary numbers of references (even infinite numbers). The basic metaphor for an N-reference arrow (or a multi-arrow or N-arrow) is a relation over N-place tuples of objects. [...] Paradoxically, the definition for multi-arrows may derive from the notion of a collection of arrows, over which a relation specifies a linear ordering. The duality of these viewpoints is desirable and exhibits the natural ability of the Arrow system to support ontological relativism in a simple manner. [...]
    As for implementation, any abstraction mechanism that exhibits the same characteristics as a multi-arrow will do. For instance, if the domain in question consists of graphs of Lisp-like linear lists of references, then it behaves as a multi-arrow environment. These lists are simple to construct given the isomorphism between the arrow primitive as an ordered pair of references to arrows and the Lisp construct of an ordered pair of pointers. [...]
    Note that this new context includes multi-arrows with both zero and one references. This allows for increased expressivity concerning the original context of arrows with two references only. A zero-reference arrow represents an effectively empty statement, true in all cases, while the single reference arrow reifies the references that arrows contain as a first-order atom.
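
    As a purely illustrative aside and not part of the quoted document, a multi-arrow can be sketched in Python as a Lisp-like chain of ordered pairs, exploiting the isomorphism mentioned in the quote; the helper names are our own assumptions.

    # Illustrative sketch of the quoted multi-arrow idea: an N-reference
    # arrow encoded as a Lisp-like chain of ordered pairs (cons cells).

    def multi_arrow(*refs):
        """Chain N references into nested ordered pairs; () is the 0-arrow."""
        chain = ()                 # the zero-reference arrow: an empty statement
        for ref in reversed(refs):
            chain = (ref, chain)
        return chain

    def references(chain):
        while chain != ():
            yield chain[0]
            chain = chain[1]

    ternary = multi_arrow("x", "y", "z")   # a 3-arrow as chained binary arrows
    print(list(references(ternary)))       # -> ['x', 'y', 'z']
    print(multi_arrow("x"))                # a 1-arrow reifies a single reference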

    Contexts
    Arrows rely on variation in context for meaning. However, variation in context is not under the state-update von Neumann permissive model or its derivatives. The system may and shall provide the simple dynamic semantics of the functional paradigm as well as the clean semantics of declarational specification. In this way, an ontology specification introduced to the system results in the incremental modification of the current knowledge base until the state of the system realizes perceived ontology. Because of this, agents can manage the interaction of various ontologies in a manner heretofore unattainable.
    [...] the system may manipulate and understand any formal symbolic theory.
    Since we may build any conceivable single relation, we may assume that the interaction of these relations would produce a declaration-based system of description for first-order objects. In this way, an external definition of an object develops incrementally and constructively, using the same methodology as a theoretician would in formalizing the notion of a system.
    However, this metaphor has more power than the previous description suggests. Since the Arrow system possesses a very simple, clean "underlying layer" of semantics, agents need assume very little in the logical structure of the system's substrate. This allows the Arrow system to delve into areas of reflection previously unattainable by current programming and knowledge-modeling systems. For instance, within current systems, the system's designers enumerate the types of reflective actions that agents may take. The system therefore does not achieve the necessary level of reflection in order to overcome these inherent limitations; one can imagine a "reflection barrier" beyond which the capabilities of software systems could grow quickly with minimal human involvement. The Arrow system design intends to transcend this limitation in current reflective capabilities by fully supporting the principles of ontological relativism at a first-order level.
    Arrows create contexts constructively in our system. Since the primary metaphor for arrows is the declarative specification of a binary relation, it seems natural to use binary relations to specify the rules and axioms for a logical system. The remaining necessary construction consists of an evaluator that obeys those rules of inference and axioms of specification to yield results to a querying agent. [...]
    [...]
    A lack of restriction on the types of evaluators permitted to access ontologies should provide a wealth of semantics. For instance, one could consider a graph over ontologies, considering ontologies as nodes, and the arrows between the nodes as the evaluation methods between them. In this way, a system actor can sequentially combine evaluators to produce new evaluators for any ontology in the system. This notion is a conceptual generalization of the mathematical notion of a category.
    << Insert ontology transition graph here. >>
    << Nodes: hardware machine-state language[s] (internal and peripherals in both state-space and time), ASCII text[s], assembly language[s], high-level languages, operating system or virtual machine state language[s], interface specification languages, graphics display interface languages, drawing primitive languages, user interaction ontology language[s], and more. >>
    << Arrows: interpretation, compilation, assembly, reverse engineering, and various representations. >>
    [...]
    Not only is the [ontology transition] graph and its implicit structure available for first-order analysis, but also the system may augment this graph by positing new ontologies or even splitting and joining ontologies in arbitrary ways.
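
    As a purely illustrative aside and not part of the quoted document, the ontology transition graph and the sequential composition of evaluators can be sketched in Python as follows; the sample ontologies and evaluators are our own assumptions.

    # Illustrative sketch of the quoted ontology-transition-graph idea:
    # ontologies as nodes, evaluators (translations) as arrows, and the
    # sequential composition of evaluators along a path between nodes.

    evaluators = {
        ("source-text", "syntax-tree"): lambda s: s.split(),   # "parsing"
        ("syntax-tree", "machine-state"): lambda t: len(t),    # "execution"
    }

    def compose_path(path):
        """Glue the evaluators along a list of ontology nodes into one."""
        steps = [evaluators[(a, b)] for a, b in zip(path, path[1:])]
        def run(value):
            for step in steps:
                value = step(value)
            return value
        return run

    run = compose_path(["source-text", "syntax-tree", "machine-state"])
    print(run("three word program"))   # -> 3
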
    As can be seen, the generated system of abstraction does not promote the metaphor that it provides abstractions in arbitrary "layers" above some substrate state-machine. Instead, abstraction is now an action taken in arbitrary "directions". In the new system, the substrate state-machine is merely a new specification of an ontology, and evaluating some specification in terms of the state-machine ontology provides "execution". [...] Note, however, that currently the state of hardware research is rapidly exploring the possibilities of devices that perform calculations far beyond the conceptualizations of Church and Gödel. The data introduced by such devices are obviously untenable for analysis by current computing systems in a useful way, evidenced by the specialized nature of these devices' interfaces and the lack of a coherent theory for abstracting upon that data. It is for these devices as well as for humans that the Arrow system design intends to serve.
    [...] The vast majority of modern systems do not even approach the limits described here. Such systems also express those arrows that implement ontologies that are "more abstract" than the base set. The most general-purpose of these systems are the homo-iconic systems wherein those arrows extending from the central sources of abstraction constitute all possible information transformations. Computer scientists refer to such systems as exploratory in nature, in that they make available all ontologies to the user in meta-order form, via first-order specifications of implementation. The parts of the system labeled "reflective" constitute the system implementation. As such, the user necessarily performs part of the representation of the periphery ontology as he or she reads the code. Notice that the notion of such systems traditionally assumes the "star pattern" which this discussion elucidates, and that this pattern is isomorphic to the notion of a hierarchy. It therefore constitutes an explicitly limited domain of information management. The other possible information transitions are not available in this scheme, even if they represent computable translations of information. Furthermore, the system only makes available the static set of central abstraction ontologies for first-order use, that is, implicitly requiring that the system multiplex implementation through one or more of the central abstraction ontologies.
    [...]
    This formalism for expressing the contextual expressiveness of modern information systems suggests a proof (both cybernetic and topological) of their limitations in terms of utility due to their inability to modify the central static set of ontologies. Such a modification would thereby express ontological relativism at a system-wide level, and enable model-level reflection.
    [...] Constructive specification from within an existing context helps create new ontologies, and therefore contexts. Arrows represent the generated ontologies within the diagram, as suggested earlier by the definition of an ontology. Other arrows link the arrow in the ontology graph to the ontology's specification, contained within the target domain. Since ontologies represent processes of translation (or interpretation) of information in the source domain, multiple arrows may exist in this diagram per pair of contexts. In addition, an intuitive idea is the use of sequential composition of ontologies, which would provide a combinator for generating new ontologies via the analogy of vector composition. Such an operation represents the gluing together of ontologies to extend the accessibility of information among domains in an automatic way. [...]
    What results is a natural system for expressing domains and ontologies within the frame of graphs.

    Meaning
    Meaning describes the dynamic effect of information received relative to the current frame of knowledge. Similarly, truth within a system of logic is local to a knowledge frame.
    The means for creating an ontology within the Arrow system entails an understanding of the means for state machine construction within the Arrow system, to enable the understanding of the state machine concept within ontology descriptions. State in traditional systems implies that a certain valuation exists for the variables of the machine. Well-defined machines have variables whose allowed values satisfy certain constraints. [...] Note that the Arrow system design intends to model all of the conceptual levels of a system. By including the entire model of a type as a first-order object in the system, variable assignment consists of arrows that "select" the appropriate element from the structured set of possible values for the type. That set constitutes the extension of the type in the usual terminology. The means for predicating the type of an object is the use of an arrowstyle mapping of the kind just introduced from the object to the intensional (axiomatic) description of the type. The creation of a type system then enables the construction of arbitrary state machines, which formally specify a language identical to the state-space of the machine.
    From this beginning, the means for ontology construction consists of specifying a set of nodes that form the constructs for a contextual vocabulary. The relations that directly refer to a node of this set are those that help define it. The closure of specification due to relations that refer only to nodes in the argument set is the ontology of the set. From this method, an agent may create all kinds of specifications whose entities are available for use by any object in the system.
    Meaning is relative to context as well. The system handles this concept by means of ontological crosscutting. [...] Using this ability, the user can model a domain in any way he or she chooses and retain the ability to translate to any other model of any domain, including those of the same domain.
    Some definable systems have no inherent capability for a truth concept; such properties of those systems are as local as the meaning of the terms of those systems.
    Agents may compare the ontologies determined by frames. A simple metaphor for the inspection by the system of an ontology considers the state of a person's mind as it peruses a dictionary entry. As an analogy, the ontological frame would describe the collection of subjects in which the person is currently interested and the person's current state of knowledge concerning them, both of which are dynamic in nature. The references made by arrows in the graphs of the ontologies would mirror the words used in the definition statement for an entry. In addition, one knows from general-purpose dictionaries for common use that often multiple entries exist for the various uses of a word. The Arrow system mirrors this concept in a far cleaner manner, since such rules of term usage apply based on the context used by the speaker, with term meaning distributed among ontologies. In the Arrow system, agents make the choice of context constructively (whether explicit or implicit), and the terms of the ontologies that a context supports are available for use implicitly. In this way, our definitions for terms are those necessary for logical discussion of an argument, and gather into reusable units automatically.
    The only problem foreseeable in this metaphor is for those dictionary aspects that contain contradictory concepts, which in an ordinary language would be troublesome when agents combine the ontologies. However, this is really the old monster of name conflicts, which really only relates to extension-derived identity found in traditional systems. The Arrow system overcomes this by positing the existence of all resulting information spaces that result from the various types of combinations of the clashing ontologies.
    Now the Arrow system proposal relates to a dynamic, growing, general-purpose dictionary of concepts. The system naturally distributes a definition of an arrow in our dictionary, even across frames of knowledge. The definition extends to a far finer structure of information than does the traditional language, in that this dictionary via arrows describes issues that constitute language and linguistic context definition as well. A frame of knowledge of a domain in this light relates to a collection of ontologies, where the various aspects (as ontologies) of the dictionary specify the definitions of the subject matter. The aspects relate to form a whole structure in which agents may reason about the relations between these definitions and analyze the composite definitions formed.
    In order to present a specification to the user, the system collects the constraints specified for an arrow within the desired frame of ontologies identified with the vocabulary specified by the user. The natural system development spreads constraints across borders between explicit and implicit worlds. The system sees all constraints as equal, yielding reflection on context trivially. The interface filters out the implicit parts of the definition and produces a structure of specification that extends to the base vocabulary that the user wants.

    Dynamics
    The Arrow system as described above is static (or kinematic at best). It remains to define the dynamics of the Arrow system. The Arrow system is a general-purpose information system that expresses contexts, ontologies, and language models as first-order concepts. It may range over many logics as context modification systems. These properties enable a novel means of expressing system dynamics.
    As an information system, agents interpret all atoms within its domains primarily as atoms of information. The means for introducing information to the system include automated deduction and undecidable selection. [...] Some kinds of advanced reflective systems produce code themselves at a fine-grain level and in doing so transfer the user choice type of information to the meta-level. [...]
    [...] This discussion considers the natural evolution of information systems toward utility to necessarily include the migration of user choice information to system deduction capabilities, since such a tendency naturally obviates redundant human involvement in the management of the information system. [...] The overall result is to blur the distinction between user and programmer, to allow for a more equitable society concerning computations in general.

    User Interface
    To consider the application of the Arrow system to the issue of user-interface, one must consider the nature of the state of information of the system. The models within the system should transcend the usual context of a "snapshot in time" for the machine, so that it encapsulates the time predicates that the user wishes to enforce. In this way, the user's desires directly affect the usual notion of implementation as it relates to interactivity of the software system and efficiency.
    The system models the ontological frame that the user employs, and interacts with the user based on that group of ontologies. In model-level terms, the system speaks directly to the ontology, and merely passes the conversation along to the user for review. The user, in turn, acts for the ontology in communicating with the system.
    [...] the logics employed as well as the process for translating this information is subject to modeling, allowing the system direct control over its operation via reflection.
    Taking the graph over ontologies offered as an example for a basis [...]
    Given this basis, the Arrow system proves an ideal platform for the automation of software production in arbitrary ways, since it can model information transitions that include hardware-related issues. For example, various types of formal specifications easily describe the format and content of processor instructions and data formats. The Arrow system casts this type of information as an ontology. In addition, agents may reliably specify protocols for interacting with various hardware devices and external software programs, with possibly less precision, in the same way. In modern systems, such information is specialized so that access to that information is itemized. By providing a specification for an interface to an information device to the Arrow system, the user may then manage that device in arbitrary ways, since the description is available for sequential composition into new ontologies and structuring into frames.
    [...] As discussed, the primary purpose of the system is to translate information between contexts in order to promote overall system utility. [...]
    [...]
    [...] Assume, for instance, that the user makes a visual choice using a pointing device that corresponds to a standard graphical user interface (GUI). Within the Arrow system, the essential information update is the notification of the update of the mouse's hardware state-machine, which exists in time, space, and state-space. This information is immediately available for translation into the user-interface ontology (say, as a single-click with the mouse button at such and such coordinates and about this time). Traditional GUI programming models then translate the meaning of this event into an abstract piece of information within a context that is abstract with respect to the underlying hardware. At that point (at the application level of abstraction), the information atom is decidedly within the arrow space and available for reasoning structures; it still embodies user choice, but is available in the context of logical operations for modeling. [...]
    The total system's semantics are of a functional paradigm, but at an information level, not simply at an execution level. In this way, information transformations are the fundamental abstraction type at the user level for the system. [...]
    [...]
    The system can reference itself as a whole using self-similarity factorization of infinitary structures that result.
    Specification of programs: data-flow and control-flow graphs have axiomatic definitions, with parts being static or dynamic with respect to contexts
    Specification of verification: inference-flow graphs
    Show that the method of integration of these graphs, traditionally static in traditional languages, is now dynamic and first-order
    Context construction, thereby, and its character
    [...]

    Garbage Collection
    [...]
    The garbage collector's axiom is to conserve system information and discard all noise. [...] Potentially constructed information is substitutable for the actual information derived, just as the results of function applications are often denoted by their persistent invocation points in modern languages, vice identifiers bound to the results' storage location. [...]

    Constructive Implementation
    The plan for the implementation of the Arrow system involves the model of reflection often introduced in the usual texts involving reflection. By subsuming the functionality of the virtual machines upon which the implementators [implementers] base the prototype system, the Arrow system environment may extend to a level that allows a reconstruction of the virtual machine implementation. However, this method simply allows the implementors to remodel the virtual machine, at a heavy cost in human effort. The criteria for the completion of the system include that the semantic capabilities should encompass the description of a minimum amount of functionality of the underlying hardware, as well as some operational algebras that enable code generation. This suggests a model where an initial virtual machine provides a simple basis for development of a knowledge base for the Arrow system, ensuring to include an adequate formal model of the virtual machine. The remaining task would be to completely re-implement the virtual machine's semantics of interface from within the system. To that end, the virtual machine should provide a direct means to access the underlying hardware as well as a simple model of that hardware state-machine. Once the Arrow system completely re-implements the virtual machine successfully, so that the system can bootstrap itself, then it will have accomplished elementary reflection. The final task would then be to place the implementation 'code' within an environment with an adequate number of modelings of the specification, to place the specification in reach of all of the system's main ontologies, thereby enabling model-level reflection.
    Text identifiers constitute information that is only interpretable by the user. As such, the information belongs to the domains associated with user interface within the appropriate part of the system. This information subsumes what usually comprises the documentation for a programming system, traditionally maintained solely by the user. In the Arrow system, a more enlightened view exists, placing this domain under the system's model-level reflective capabilities. Formal theories of the understanding of natural human language may assist in translating ontologies into human-readable documentation, and this process itself is of course available for improvement under the system design. The naturally useful goal is of course to render all documentation as a dynamically calculable interface with the ontology frame that the user prefers. Of course, in any Arrow system wherein information is being added, the system receives all new information as non-calculable, and then proceeds to define it in terms of existing information as specified by meta-information introduced to it. It follows that the system will not be able to completely describe the meaning of its information.
    The information interface between the system and the user is not to be trusted implicitly, since human mental processes are often incalculable for modern machines. In fact, often it is more useful to forgo system understanding in order to facilitate communication between people, given that the lifetime of the information is temporary.

    Conclusions
    While the claims made for the Arrow system's potential seem grandiose, the basis for system's information model lies in modern research systems as well as in the examples of computing systems of the past. The central concept of the system design is to increase the capabilities of the system to analyze and act to improve its performance and utility by generalizing the system's modeling capabilities in a meaningful way. The anticipated result is the greater overall freedom in the use and re-use of information for arbitrary purposes, and to minimize the negative impacts on society associated with maintaining a large body of knowledge. [...]"

    Comment
    Honestly, on page 18 we got the impression that the author wants to take the audience for a ride.

    We have referenced the TUNES project in The Proposal and the section Exotic Operating System of the webpage Links to Software of the website of OntoLinux, but only many years later read about the Arrow Logic and the Arrow System in more detail, and even said a little too quickly that "the Arrow System [...] also constitutes one of the foundations for our Ontologic System (OS)" (see also the Clarification of the 6th of March 2017).
    But after looking once again at this, and due to our new insights in relation to the other fraudulent and even serious criminal activities in the fields of ABS and IE, specifically in relation to Telecommunication Service Providers (TSPs), and the reasons following in this comment, we have to express considerable doubts about said reference to the TUNES OS.
    Therefore, a better statement would be that it matches the foundational idea or concept and most aspects of our Evoos virtually 1:1, and merely describes them in more detail and with other terms.
    For example, a brain could also be viewed as a homo-iconic system, and because the single unit or node is able to represent relationships, it reflects a neural cell or neuron.
    Other highly suspicious points are the suggestion of the Belief-Desire-Intention (BDI) agent architecture, the field of component-based software engineering, and so on.
    At this point we do not view this equality and conformity as a happenstance respectively a coincidental collision of interests anymore, due to the high complexity of the foundational concept, the exact match with the activities of C.S. in all related fields in the years before and with The Proposal respectively our Evoos, which was a modern research system at that time, and our new insights mentioned before.
    Our point of view is also supported by the fact that our Evoos is also based on Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot).

    The functionality and scope of application of the Arrow System equal those of a

  • kenogrammatic structure, such as the proemial relationship, because it is a mere potential, which becomes an actual relation only as either a symmetrical exchange relation or a non-symmetrical ordered relation, and because it has two slots (see the sketch after this list), and
  • abstract, self-contained, self-referential, or reflective and interactive binary relational system, such as a triple store system. But we also view and describe it as a human thought without a brain or a spirit without an ether due to the lack of symbol grounding. See also the Clarification Caliber Special #1a of the 20th of May 2011 and the Clarification Caliber Special #1b of the 21st of May 2011.
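
    For illustration, a minimal sketch in Haskell (our own, hypothetical illustration, not taken from any of the works discussed here) of the proemial relationship as a mere potential over two slots, which becomes an actual relation only when actualized either as a symmetrical exchange relation or as a non-symmetrical ordered relation:

      -- Hypothetical sketch: the proemial relationship as a mere potential
      -- over two slots, actualized either as a symmetrical exchange
      -- relation or as a non-symmetrical ordered relation.
      data Potential a = Potential a a
        deriving Show

      data Actual a
        = Exchange a a   -- symmetrical: Exchange x y equals Exchange y x
        | Ordered  a a   -- non-symmetrical: Ordered x y differs from Ordered y x
        deriving Show

      -- Symmetry of the exchange relation is imposed by comparing both
      -- slot orders; the ordered relation compares the slots as given.
      instance Eq a => Eq (Actual a) where
        Exchange a b == Exchange c d = (a == c && b == d) || (a == d && b == c)
        Ordered  a b == Ordered  c d = a == c && b == d
        _            == _            = False

      -- Actualization: the potential commits to one of the two readings.
      asExchange, asOrdered :: Potential a -> Actual a
      asExchange (Potential x y) = Exchange x y
      asOrdered  (Potential x y) = Ordered  x y

      main :: IO ()
      main = do
        let p = Potential "operator" "operand"
        print (asExchange p == Exchange "operand" "operator")  -- True
        print (asOrdered  p == Ordered  "operand" "operator")  -- False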

    Whether such a system can be realized in its ideal form at all is an unanswered question, because

  • fundamental difficulties with the formalization of such a kenogrammatic structure exist,
  • it is similar to the ontological argument or ontological proof, and
  • it remains a riddle how a binary relation should be realized without a concept like, for example, the bit and a related state-space architecture, specifically in relation to state machines and translations among them as the primary dynamics of a binary relational system.

    See also the

  • Clarification Caliber Special #1a of the 20th of May 2011 and the Clarification Caliber Special #1b of the 21st of May 2011 and
  • document titled "On the isomorphism of sign, logic and language [-] A novel framework for language modelling" referenced in the section Systems Theory, Complex Systems, Cybernetics.

    Also, the assumption by the author that an atomic relationship called arrow exists in general, and that a second arrow and a third arrow exist with a relationship between them, which is the first arrow, is a conceptual flaw, because the presented cybernetic system will not be able to handle itself completely.
    In fact, our research and development conducted for The Proposal shows that the description of the Arrow System and every other system is part of the first-order system and therefore can be easily expressed with a formal system in general and a formal logical system in particular, such as the classical First-Order Logic (FOL), and signs (for example 0 and 1), strings or words made out of these signs (e.g. bit strings), and sentences made out of these words (e.g. programs). Therefore, the real questions are

  • how the semiotics of the signs is grounded, which leads us to a pre-semiotic domain described by kenogrammatics and PolyContextural Logic (PCL), and
  • how small a first-order system can be from the point of view of Algorithmic Information Theory (AIT).

    But here again, the operational model of the kenogrammatic proemial relationship can be implemented with the lambda calculus (see the quote of the document titled "Introducing and Modeling Polycontextural Logics" above). "It may be said therefore, that the approach given here is not a transclassical model, but instead only a particular application and interpretation of a classical formalism. This restriction must necessarily apply, since the model is formulated within the linguistic framework of classical formal systems and programming languages (ML, HASKELL) i.e. positive languages." See also "Computing with Words" (Clarification of the 14th of May 2016, 8th of July 2016, and 23rd of August 2017) and literate programming.
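    To make the quoted restriction concrete, a small sketch (again our own illustration, with Haskell standing in for the positive languages named in the quote): the ordered pair and its exchange (swap) encoded as plain lambda terms, i.e. entirely within a classical formalism:

      {-# LANGUAGE RankNTypes #-}

      -- Church-encoded ordered pairs: both the ordered relation and its
      -- symmetrical exchange are expressible as plain lambda terms.
      type Pair a b = forall c. (a -> b -> c) -> c

      pair :: a -> b -> Pair a b
      pair x y f = f x y

      first :: Pair a b -> a
      first p = p (\x _ -> x)

      second :: Pair a b -> b
      second p = p (\_ y -> y)

      -- The exchange relation as a lambda term over the ordered one.
      swap :: Pair a b -> Pair b a
      swap p = pair (second p) (first p)

      main :: IO ()
      main = do
        let p = pair (0 :: Int) (1 :: Int) :: Pair Int Int
        print (first p, second p)                -- (0,1)
        print (first (swap p), second (swap p))  -- (1,0)
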
    At this point NLP and the rest of NMP become extremely interesting and important, because no matter what a human or a (computing) machine does to be of use and benefit, it must be remembered in a way that can be expressed as a human action.

    We also do not share the concerns related to the meta-instantiation in relation to Object-Orientation (OO 1), because we add robots to the overall system, which are creative and able to build their own hardware, which can still be done without leaving the OO 1 paradigm. For sure, the next argument against OO 1 in this context would be to view the real universe as the base context, which leads us back to the ontological argument.

    In the section Graph the way of lazy evaluation and resolution of the graph is discussed. At this point we saw directly, in relation to similar works, that such an approach is not practical, because one has to do this all the time. Therefore, a mechanism is required that shortens the process respectively reduces the complexity. At this point one can use identifiers or another concept to store data or information.
    For this reason, individual systems are defined, specifically Abstract Machines (AMs) and Virtual Machines (VMs), and all the rest is only relevant to the developer of such a system, because, for example, such AMs and VMs are very general or universal and can be used in very versatile ways (e.g. programming, executing, storing, and messaging), so that there is no need for more.
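    A minimal sketch of the shortening mechanism meant here (our own illustration with hypothetical names): instead of lazily re-resolving a subgraph on every access, an identifier is bound once to the resolved value, so that every further access is a mere lookup:

      import qualified Data.Map.Strict as M

      -- A toy expression graph whose nodes are resolved lazily.
      data Expr = Lit Int | Add Expr Expr

      -- Naive resolution: the whole subgraph is walked on every access.
      resolve :: Expr -> Int
      resolve (Lit n)   = n
      resolve (Add a b) = resolve a + resolve b

      -- The shortening mechanism: identifiers bound to resolved values.
      type Store = M.Map String Int

      -- Resolve once, store under an identifier, look up afterwards.
      remember :: String -> Expr -> Store -> Store
      remember name e = M.insert name (resolve e)

      recall :: String -> Store -> Maybe Int
      recall = M.lookup

      main :: IO ()
      main = do
        let e     = Add (Lit 1) (Add (Lit 2) (Lit 3))
            store = remember "sum123" e M.empty
        print (recall "sum123" store)  -- Just 6, without re-walking the graph
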
    See also the document titled "The Universal Triple Machine: a Reduced Instruction Set Repository Manager" published in 1988.
    As said elsewhere in this comment, we also come to a point where it simply makes no sense to write or program more from the point of view of Algorithmic Information Theory (AIT), because writing or programming more only increases the complexity but does not increase the functionality of a system and its specification, description, or source code.

    {not ready} But this is only the beginning of many more conceptual problems or, better said, flaws.

    If "the system is specification of atomic relationships (called arrows) between objects that are themselves arrows as well", then where are the symbols, the languages, and the types? We only have a homo-iconic system based on the single concept of this arrow.
    If "the constraint metaphor provides an arbitrary abstraction level, since any object that the system can postulate is available as an arrow for declaration of constraints" and "within a complete information system built from arrows, each arrow is available for statements constructed by other collections of arrows[, which] enables arrow models to build the definitions of objects[==arrows] constructively and incrementally", then the result is only a graph consisting solely of arrows==atomic relationships (the only means of information atomicity, sign, or whatever), all arrows belong to the first-order system, and no model-level reflection is possible in a meaningful way, and a distinction between object level and context-level is only given in certain moment, which questions other properties.
    "The Arrow system blurs the distinction between context type and other types. Essentially, the system renders all types in terms of arrows. Only a context can yield meaning for a given arrow. Such a context is due to a type of abstract virtual machine built dynamically by the environment from the specifications of ontologies." But then everything is virtual, because the specifications of ontologies are given as arrows==atomic relationships, too, and therefore are virtual, too. In addition, we have no signs, languages, and types to specify ontologies in a meaningful way.

    How does an Arrow System know what is what, for example the following concepts, properties, and tasks:

  • meaning of an arrow,
  • atom,
  • predicates specifying identity, inverse, and composition relations over atoms,
  • categories,
  • binary relation,
  • set-membership relation,
  • arrows as data structure of slot,
  • order of a slot,
  • only reference other arrows by the slots directly,
  • arrows as function specification of ordered pairs specifying functions,
  • is not a language, and
  • has to reify everything as arrows?

    Eventually, there must exist some kind of symbol system (the Arrow System has only the virtual symbol arrow) and symbol grounding (meaning). In cybernetics, semiotics, and related fields these subjects are known and discussed as the

  • Logic without Syntax,
  • Symbol Grounding Problem (SGP),
  • Self-Modifying Systems in Biology and Cognitive Science,
  • Integrated connectionist models or subsymbolic models (e.g. parallel distributed processing, or distributed connectionist, or distributed neural network architecture),
  • Dynamic Symbol System (DSS), and
  • Physical Symbol System Hypothesis (PSSH)

    (see also the Clarification of the 29th of April 2016).
    Without solving the Symbol Grounding Problem, the domain of reflection and model-level reflection, specifically the human thought and its generalization, and the related explanations about the user's way of thinking, as claimed by the author of the Arrow System, seem not to be given at all.

    See also the works

  • Newell, A., Simon, H.A.: Computer Science as Empirical Inquiry: Symbols and Search. 1976.
    The work formulates the Physical Symbol System Hypothesis (PSSH).
  • Harnad, S.: The Symbol Grounding Problem. 1990.
  • Harnad, S.: Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't. 1994.
  • Miikkulainen, R.: Integrated Connectionist Models: Building AI Systems on Subsymbolic Foundations. 1994.
  • Jaeger, H.: Dynamische Symbolsysteme (Dynamic Symbol System) (DSS). 1994.
    Note that an anytime-algorithm is used for systems, applications, and services, which are executed in real-time, and in many cases it is even the only possibility.
    Take a look at Fig. 1.1 to see what our Evoos includes. In fact, it is the whole package. See also the works related to the subsumption architecture, such as the documents titled "Intelligence without Reason" and "Intelligence without Representation", and also related works discussed in the Clarification of the 18th of July 2021 (about Ubiquitous Computing (UbiC) and Internet of Things (IoT), Cyber-Physical System (CPS), and Building Automation System (BAS) and Building Management System (BMS), which is also known as Intelligent Environment (IE), and also Cybernetical Intelligence (CI)) and of the 18th of February 2022 (about Agent-Based System (ABS), LifeLogging (LL), Mixed Reality (MR), eXtended Mixed Reality (XMR or XR) or eXtended Reality (XR), and also New Reality (NR) and Ontoverse (Ov)).

    The DSS was not a source of inspiration or foundation of Evoos, but more a confirmation of our Caliber/Calibre and a source of inspiration for some detail aspects.
    Obviously, our Evoos has the properties of a DSS as well as a hybrid agent architecture with real-time execution capability in addition to the properties of PCL.
    Furthermore, when we look at Fig. 1.2, which shows the four stages of the DSS, we see a lot of relationships visualized as arrows.
    Therefore, it is proven that our Evoos could easily be implemented in 1999 and that this core system could even have been implemented before.
    See also

  • Riga, T., Cangelosi, A., Greco, A.: Symbol Grounding Transfer with Hybrid Self-Organizing/Supervised Neural Networks. 2004

    Systems based on the Binary-Relational Model (BRM) respectively the relationship represented by a triple of the form (subject, relation, object), such as, for example, a triple store system, have been (re)invented again and again over the last decades.
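    For orientation, a minimal sketch of such a triple store (our own illustration with hypothetical names; the six UTM operations quoted below are not reproduced): triples of the form (subject, relation, object) with insertion and wildcard-based query:

      -- A minimal triple store over the Binary-Relational Model:
      -- facts of the form (subject, relation, object).
      type Triple = (String, String, String)

      type TripleStore = [Triple]

      insertT :: Triple -> TripleStore -> TripleStore
      insertT = (:)

      -- A query pattern: Nothing acts as a wildcard in each slot.
      query :: (Maybe String, Maybe String, Maybe String) -> TripleStore -> [Triple]
      query (s, r, o) = filter match
        where
          match (s', r', o') = fits s s' && fits r r' && fits o o'
          fits pat val       = maybe True (== val) pat

      main :: IO ()
      main = do
        let store = insertT ("Oggetto", "layeredOn", "TripleStore")
                  $ insertT ("Evoos", "integrates", "TripleStore") []
        -- All facts about the object "TripleStore":
        print (query (Nothing, Nothing, Just "TripleStore") store)
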
    See for example the documents titled

  • Sharman, G.C.H., Winterbottom, N.: The Universal Triple Machine [(UTM)]: a Reduced Instruction Set Repository Manager. 1988.
  • Mariani, J.A.: Oggetto: An Object Oriented Database Layered on a Triple Store. 1989.
    "Sharman and Winterbottom,^21 show how Prolog algorithms can be expressed in the terms of their UTM primitives. [...] An important requirement for OODBs is the ability for the schema to evolve. Triple stores are uniquely placed to meet these requirements as the metadata is stored with the data itself,^10^17 and we can thus apply triple store operations to the metadata."
    The work references the following other works among others, whose titles are often self-explanatory:
    • Abrial, J.R.: Data Semantics. 1974.
    • Senko, M.E.: The [Data Description Language (]DDL[)] in the context of a multilevel structured description: [Data Independent Access Model (]DIAM[)] II with FLORAL. 1975.
      DIAM has a layered architecture.
    • Senko, M.E.: Data structures and data accessing in data base systems past, present, future. 1977.
      The work shows "how the three levels of DIAM correspond to ANSI-SPARC schema levels [...]." The American National Standards Institute (ANSI), Standards Planning And Requirements Committee (SPARC) (ANSI-SPARC) Architecture is an abstract design standard for a DataBase Management System (DBMS).
    • Sharman, G.C.H., Winterbottom, N.: The [... UTM ...]. 1988.
      "[The UTM] supports six basic operations to manage a collection of triples. [...] the UTM's operations are capable of supporting the data structures and operations found in a wide range of database and Al systems. [... The authors show how Prolog algorithms can be [directly] expressed in the terms of their UTM primitives. [...] An important requirement for OODBs is the ability for the schema to evolve. Triple stores are uniquely placed to meet these requirements as the metadata is stored with the data itself,^10, 17 and we can thus apply triple store operations to the metadata. [...] The schema of an OODB is usually viewed as the structure built up by the inheritance mechanism, which can be considered as a directed acyclic graph [(DAG)]. The nodes in this graph describe the types and their attributes. The edges of the graph represent the inheritance relationship."
    • Frost, R.A.: ASDAS - A Simple Database System aimed at the naive user. 1981.
    • McGregor, D.R., Malone, J.R.: The FACT database system. [1980 and] 1981.
      The work utilizes the binary-relational model in the field of Artificial Intelligence (AI).
    • Fisher, D.H.: Knowledge acquisition via incremental conceptual clustering. Machine Learning 2. 1987.
      "Schema evolution as described in this paper is static, in that we cannot dynamically form new sets through some kind of data analysis, i.e. clustering. These issues are addressed in refs 8 [Knowledge acquisition [...]. 1987], 13 [[...] UNIMEM. 1987] and partially applied to OODBs [...].
    • Lebowitz, M.: Experiments with incremental concept formation: UNIMEM. Machine Learning 2. 1987.
  • Mariani, J.A., Lougher, R.: TripleSpace: an experiment in a 3D graphical interface to a binary relational database. 1992.
    The work is used for cognitive engineering and Virtual Reality (VR).
  • Sawyer, P., Colebourne, A., Mariani, J.A., Sommerville, I.: Interactive Database Objects. June 1994.
    The work "describes a user interface framework called Moggetto for an object-oriented database system (OODB)" simply described as Oggetto with MOG editable widgets.
  • Shenoy, R.: Investigation of the Use of the Object-Oriented Paradigm in the Construction of a Triple Store based on Dynamic Hashing. 1994
    The work references the following other works among others, whose titles are often self-explanatory:
    • Abrial, J.R.: Data Semantics. 1974.
      "According to Mariani [27 [Oggetto: An Object Oriented Database Layered on a Triple Store. 1989 and 1992.]] "The binary-relational model first came into prominence in 1974 [4] and was further developed by Senko's work on DIAM (data-independent access model) I and II [38 [The DDL [...]. 1975]]".
    • Senko, M.E.: The DDL [...]: DIAM II with FLORAL. 1975.
    • Shave, M.J.R.: Entities, functions and binary relations: steps to a conceptual scheme. 1981.
      "The binary-relational view of the universe is increasingly being used during the data-analysis stage of a database system design [...]."
    • Frost, R.A.: Binary-Relational Storage Structures. 1982.
      The work utilizes the binary-relational model in the field of Natural Language Processing (NLP).
    • McGregor, D.R., Malone, J.R.: The FACT database: A system based on inferential methods. 1980 [and 1981].
    • Kim, Won: Object-Oriented Databases: Definition and Research Directions. 1990.
      The work utilizes the binary-relational model in the field of Artificial Intelligence (AI).
    • Mariani, J.A.: Oggetto [...]. [1989 and] 1992.
    • Czejdo, B.D., Taylor, M.C.: Integration of object-oriented programming languages and database systems in KOPERNIK. 1992.
      The OODB allows uniform specification of database requests and application programs. The user interface is based on Smalltalk. The OOD model is represented in terms of classes and messages and implemented on top of a relational database system. The binary-relational view is used as an intermediate level between the underlying database and the conceptual view.
    • Poulovassilis, A.: The Implementation of FDL, a Functional Database Language. 1992.
    • Azmoodeh, M.: BRMQ: A Database Interface Facility based on Graph Traversals and Extended Relationships on Group of Entities. 1990.
      A binary-relational model is used as the heart of a coexistent database system architecture.
    • Sharman, G.O.H., Winterbottom, N.: NBD: Non-Programmer Database Facility. 1979.
    • Frost, R.A.: ASDAS [...]. 1981.
    • McLeod, D., Afsarmanesh, H., Knapp, D., Parker, A.: An Object-Oriented Approach to Extensible Databases for [Very Large Scale Integration (]VLSI[)]/[Computer-Aided Design (]CAD[)]. 1985.
      The work discusses the utilization of the Object-Oriented (OO 1) paradigm and Object-Oriented DataBases (OODBs) in relation to Computer-Aided technologies (CAx), specifically the Object-Oriented Analysis and Design (OOAD) for Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM), but also Computer-Aided Engineering (CAE) and Computer-Aided Software Engineering (CASE).
    • Mariani, J.A., Lougher, R.: TripleSpace [...]. 1992.
      The work explores the possibilities of the binary-relational model for the graphical representation in a three-dimensional topology, specifically in relation to the fields of 3D Graphical User Interface (GUI) (to a binary-relational database) and Virtual Reality (VR).
  • Kennedy, J., Barclay, P.: Desktop Objects: Directly Manipulating Data and Metadata. 8th to 10th of June 1996.
    The work integrates the desktop metaphor and the Object-Oriented DataBase (OODB) Oggetto to the Oggetto Desktop.

    With the exception of the first document, all other documents are already referenced in the sections Semantic File/Storage System and Visualization of the webpage Links to Software. See also the additional notes in the related Website updates of the 6th of March 2017 and 4th of January 2018.

    The idea behind Evoos is obvious: evolution, bionics, logics, cybernetics, architecture, layered architecture, hybrid architecture, fractal, holonic, reflective, parallel, distributed, ANN, BRM, Dos, MAS, CS or CA and CAS, Natural Multimodal Processing (NMP), realities, etc.

    The Arrow System discusses only an Abstract Virtual Machine (AVM) (see also for example the Peer-to-Peer (P2P) Virtual Machine (VM) (P2P VM) Askemos) and "low-level (byte-order-dependent) protocols" respectively endianness, which implies that it does not discuss fields like

  • operating system-level virtualization or containerization, and also
  • Software-Defined Networking (SDN).

    Also referenced in the section Exotic Operating System of the webpage Links to Software is the work:

  • Folliot, B., Piumarta, I., Riccardi, F.: Virtual Virtual Machines [(VVMs)]. 1997.
    This work solves the "problem of rigidity in VM environments, which eventually leads to lack of interoperability. Our proposal renders the VM environment flexible, removing this final objection".

    Due to the older works related to reflective systems, specifically the fields of

  • proemial and polycontextural Cybernetic System (CS or CybS),
  • reflective Distributed operating system (Dos) Apertos (Muse) and Cognac based on Apertos (Muse), which is very similar to the TUNES OS with only very few differences in their basic properties, and
  • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),

    model-level reflection is not really new.

    Obviously, somebody has not done her, his, or their homework or just forgot to reference properly (not really).
    Eventually, the originality and uniqueness of the Arrow System should be viewed as highly questionable, because it is Günther's proemial relationship and kenogram, which is camouflaged by citing the Robinson diagram, and somehow also our Evoos. In addition, we have shown that our Evoos does not depend on the TUNES OS project, because our integration of parallel, distributed ANN, cybernetic logics, and the multi-layered Distributed os Aperion (Apertos (Muse)) already provides the same basic foundations, but very suspiciously the Arrow System is very similar to our Evoos and even surfaced at exactly the same time.
    The points left are fully-reflective system, agent, ontology, and inclusion of user.
    The fully-reflective system property lacks a grounding and also a Zero Ontology, Null Ontology, Ontologic Zero, or Ontologic Null (see also the Clarification of the 11th of February 2019). We added it, and as everybody knows, the move from the Roman numeral system to the Arabic numeral system was already a shift of civilization (see also the Ontologic Net Further steps #2 of the 18th of February 2019).
    The term agent is used for a lot of natural and artificial things. Due to the lack of explicitly addressing the fields of Agent-Based System (ABS) and Agent-Oriented Programming (AOP), we have to assume that the term was used with its general meaning.
    The term ontology is used for a lot of philosophical and technical things. Indeed, the term is used in relation to the fields of Computer Science (CS), Artificial Intelligence (AI), and Cybernetics (Cyb), specifically in relation to specifications, knowledge bases, and graph-based systems. But the work includes no reference to the fields of Semantic (World Wide) Web (SWWW), Dynamic Semantic Web (DSW), Linked Data (LD), and related fields and subjects. We also have prior art that covers most of the graph-based foundation and the Binary-Relational Model (BRM).
    Because "such [cybernetic] entities usually involve an incomplete union between man and machine", a user is included in the scope of the Arrow System, but merely as a reader of code and as a provider of input at the User Interface (UI), specifically the Human-Machine Interface (HMI) or Man-Machine interface (MMI), or being more precise, the Human-Computer Interface (HCI),

  • data input through ordinary file operation and keyboard operation, and
  • signal input through ordinary point and click actions respectively two-dimensional Graphical User Interface (2D GUI) and pointing device operation,

    and due to the lack of multimodality a user-reflection is not given.

    The term metaphor is only used as a different term for paradigm in the context of the Arrow System.

    At this point, one can see that the Arrow System is an (expression of an) idea of a system that is based on a relationship and entities, which again are reified relationships and can be reified as relationships, and that many different concepts can be created as concrete realizations, which more or less come close to the unreachable complete realization of said idea. This leads directly to the real problem of the implementation of such a system without making more or less compromises to its basic philosophy and concept.
    Indeed, the author acknowledges this problem with his approach to establish the base context respectively bootstrap the first-order system of an Arrow System, which somehow reflects our bootstrapping approaches described in The Proposal.
    In addition, most of such an implementation would only equal already existing systems based on a triple store respectively binary relational database (see the list of prior art in this field above), which can easily be implemented with a reflective functional programming language (see for example Lisp, Scheme, ML, Haskell, etc.) and utilized as the runtime environment and development environment, including a Visual Programming Language (VPL) environment.
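    A small sketch of the point quoted above that "the metadata is stored with the data itself" (our own illustration with hypothetical names): the schema is just more triples, so schema evolution and schema queries use the very same triple operations as the data:

      -- Sketch: in a triple store the schema is just more triples, so
      -- the same operations apply to data and metadata alike, which is
      -- what makes schema evolution cheap.
      type Triple = (String, String, String)

      dataTriples, schemaTriples :: [Triple]
      dataTriples   = [("emp1", "name", "Ada"), ("emp1", "isA", "Employee")]
      schemaTriples = [("Employee", "hasAttribute", "name")]

      -- Evolving the schema is an ordinary insertion of a triple.
      evolve :: Triple -> [Triple] -> [Triple]
      evolve = (:)

      main :: IO ()
      main = do
        let store  = dataTriples ++ schemaTriples
            store' = evolve ("Employee", "hasAttribute", "salary") store
        -- Querying the metadata with the same mechanism as the data:
        print [o | (s, r, o) <- store', s == "Employee", r == "hasAttribute"]
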

    Obviously, the TUNES OS project failed to come up with a running system, in total contrast to our Evoos and OS, as proven with the field of Grid, Cloud, Edge, and Fog Computing (GCEFC) and also Space Computing (SC), the various Ontoscope (Os) variants, and so on.
    The same holds for the

  • Ubiquitous Computing (UbiC) or Pervasive Computing (PerC) and Internet of Things (IoT), Affective Computing (AC or AffC), Autonomous System Web or Immobot Web,
  • Semantic (World Wide) Web (SWWW), including the Web 3.0,
  • Dynamic Semantic Web (DSW), and
  • Metaverse and 3D Web,

    in total contrast to our OS with its Ontologic Web (OW), Ontologic Net (ON), and Ontologic uniVerse (OV) respectively Ontoverse (Ov).

    We do not buy it anymore, because something is wrong with that TUNES OS project in general, and that Arrow Logic and Arrow System in particular.

    Furthermore, we have the impression that the paper was not the creation of a single author and that it was not a spare-time project.
    In fact, that work is based on a lot of information from various fields provided by other entities and collected by the author without proper referencing. This point of view is supported by the much too short list of references with only 10 references, which should list at least 5 times more works related to all the discussed fields.

    The timing is odd for exactly the same reasons that we gave in relation to the fields of Agent-Based System (ABS) and Intelligent Environment (IE), and later in relation to the fields of Cybernetical Intelligence (CI) and Cyber-Physical System (CPS).
    The final version is titled "The Arrow System Philosophy" and was published on the 6th of January 2000, but the only differences from its first publication as version 8 are the addition of the term philosophy to the title and an index of the paper.
    This shows that they all observed our work on The Proposal and waited since 2000 in case there was more to steal after the publication of The Proposal describing our Evoos on the 10th of December 1999, but (at first) there was nothing interesting for them in this regard.

    It is also unusual to designate such a paper as a proposal, which once again reflects The Proposal of C.S.
    That we find

  • homoiconicity and homo-iconic==the same-representation or self-representing system, which reminds us of a cell and a holonic system,
  • ontology,
  • agent,
  • human thought,
  • cybernetic entity usually involves an incomplete union between man and machine,
  • computing system,
  • information system,
  • database,
  • knowledge base,
  • virtual machine,
  • aspects,
  • intensionality,
  • natural evolution, and
  • negative impacts (of os)

    in one work is also highly suspicious now.

    Honestly, that is much too much similarity with our activities of research and development, and creation done at that time, and therefore we do not think about a happenstance in this case and all the other cases with such an odd footprint and suspicious deficits for many years anymore, but wonder more and more about who has got information about our activities done at that time and distributed it around the world.
    Today, we do know by our investigations and other forensic activities that the espionage, unauthorized sharing of information, and other fraudulent and even serious criminal actions must have begun around the year 1998 and continued in the year 1999 and the following years. One or more entities wanted to destroy our work of art and did so to a broad and deep extent across the related fields and communities. And we have a pretty clear idea in which direction we have to look.

    As usual in the fields of philosophy and cybernetics, there is a lot of talk, but only few solutions and a lack of implementation.

    For sure, we do know the prior art very well and therefore we are able to show that none of them challenges the originality and uniqueness of our work of art described in The Proposal.
    The Arrow System has no relation to

  • networking, specifically World Wide Web (WWW),
  • Semantic (World Wide) Web (SWWW) (TUNES OS has Metatext, which should be similar),
  • Machine Learning (ML), Artificial Neural Network (ANN), Computer Vision (CV), Simultaneous Localization And Mapping (SLAM), Computational Intelligence (CI), Fuzzy Logic (FL), Evolutionary Computing (EC), Soft Computing (SC), Swarm Intelligence (SI) or Swarm Computing (SC), etc. respectively SoftBionics and HardBionics, exclusive of Artificial Intelligence (AI),
  • Artificial Life (AL),
  • hybrid agent architecture with reflective architecture, or subsymbolic computing or processing, or other properties,
  • Agent-Oriented Programming (AOP),
  • holonic system,
  • Holonic Agent System (HAS),
  • Multi-Agent System (MAS),
  • Autonomic Computing (AC),
  • Resource-Oriented Computing (ROC),
  • Service-Oriented technologies (SOx),
  • smart contract transaction protocol,
  • blockchain technique,
  • multimedia,
  • real and physical, cybernetical and digital, and virtual and metaphysical (information) spaces, environments, worlds, and universes respectively realities,
  • fusion of realities,
  • and much more.

    It has only philosophical ontology, computational ontology, simultaneity, and interactivity; but no MAS implies no parallelity of multiple agents and therefore no PCL.
    It has only a 2D GUI with a pointing device (e.g. mouse).

    Due to the direct relation to the Distributed operating system (Dos) TUNES OS, the Arrow System has an indirect relation to

  • Distributed System (DS) or Distributed Computing (DC),

    but it lacks a direct relation to them.

    Due to the domain of human thought (and Artificial Intelligence (AI) and Knowledge Management (KM)), the Arrow System has an indirect relation to

  • brain-like system, Associative Memory (AM) or Associatively-Addressable Memory (AAM) (e.g. Content-Addressable Memory (CAM), BlackBoard (BB) (e.g. Tuple Space (TS)) system, Space-Based technologies (SBx)),
  • Cognitive Agent System (CAS),
  • ...

    but it lacks a direct relation to them.

    Because we are already talking about a specific implementation of a multiparadigmatic Computing System (CS), Information System (IS), and Knowledge-Based System (KBS), and also Development Environment (DE), a lack of direct relation is considered as having no relation.

    As a result of our renewed review of the Agent-Based System, PolyContextural Logic (PCL), Arrow System, etc., we would say that our Evoos is our integration of the

  • triple store system, including Oggetto,
  • non-classical logics, including many-valued logics, including Fuzzy Logic (FL), etc.,
  • cybernetical logics, including PolyContextural Logic (PCL), Arrow Logic and Arrow System, etc.,
  • Distributed operating system (Dos), including Aperion (Apertos (Muse)) and TUNES OS,
  • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),
  • Multi-Agent System (MAS), including FIPA,
  • Holonic Agent System (HAS), including InteRRaP,
  • Intelligent Virtual Environment (IVE), including Social Interaction Framework for Virtual Worlds (SIF-VW) and Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents (CoMMA-COGs),
  • Cognitive Agent System (CAS), including SOAR,
  • (foundation of) Network Virtualization (NV),
  • (foundation of) microService-Oriented Architecture (mSOA),
  • (foundation of) Software-Defined technologies (SDx), including Software-Defined Networking (SDN),
  • Multimodal User Interface (MUI), including MultiModal Dialogue System (MMDS),
  • eXtended Mixed Reality (XMR or XR), including AR, AV, and VR,
  • and so on.

    We quote a document, which is a draft of the specification for a reflective distributed operating system and was published on the 8th of April 2003: "The TUNES System Specification
    Overview

    Purpose
    This document presents [a] schematic description of the requirements for a TUNES system architecture. This is a working draft, designed to provide precise technical feedback to TUNES project members as design issues are resolved. Implementation strategies are suggested, but not specified.

    Scope
    This document specifies the basic requirements required to satisfy the definition of a TUNES system as set out at the TUNES Project website [...]. [...]

    History
    The TUNES Project has existed for many years in an early planning and speculation stage. Several projects exist to advance its goals, but coordination was found to be necessary to help guide the project and solidify the goals.

    [...]

    Subsets
    This specification defines a standard subset in various sections which is considered suitable for bootstrapping. This subset may be altered to suit dynamic requirements as the initial construction and bootstrap process proceed.
    [...]

    The System

    Introduction
    A running TUNES system consists of a self-supporting CONFIGURATION of OBJECTs.

    Description
    A TUNES system is distinguished from a programming language, an operating system, or various mixtures of these concepts as such. TUNES is first and foremost an environment; that is, it is defined by the services it provides rather than the form, structure, or interface that those services take on in a given implementation. The project is moreover an attempt to separate service or interface from implementation in a very broad, systematic sense, while providing a coherent environment in which many implementations may co-exist and provide the same services in different CONTEXTs.

    Requirements

    Fully-Reflective
    [...]

    Unified
    The system must provide standard support for unification of system ABSTRACTIONs. This requires at least that for any two TYPEs of OBJECT within the system and corresponding CONTEXTs, there must exist some standard way to express (or generate the expression of) each OBJECT in the opposite CONTEXT.^1
    ^1 This requirement's formulation is flawed.

    Verifiable
    The system must contain and be able to subject its parts to a means of mechanical verification of assertions within contexts. There must also be a published means of communicating these results when migrating software.

    Higher-Order
    [...]

    Self-Extensible
    [...]

    [...]

    Fault-Tolerant
    A TUNES system must provide some means for assuring that no mismatch or variation of expected behavior will interrupt the system as a whole.

    Distributed
    Any operation or object should be implementable or re-implementable by a coordination of many other objects without restriction in expressiveness. This applies to physical distribution as well as semantic distribution.

    Scalable
    [...]

    Aspects
    [...]

    Types
    Object All elements of the system. Within this document, any unqualified use of this term means any element of a TUNES system. Objects may be manifest or implicit in a particular context, but there will at all times be some mechanism(s) available to make any object manifest within a context.
    [...]
    Meta-Object The term for an OBJECT which deals with some part of the essential characteristics of another. An object can be a meta-object of another object independently of whether or not it provides some function of the implementation of that target object. [...]
    [...]

    [...]

    Meta-Linguistic
    Introduction
    The Meta-Linguistic aspect of a TUNES system includes all objects which deal with the expression of languages and relation and translation issues between them.

    [...]

    Operations
    [...]
    Rewrite Applying EFFECTs specified by the program to the system.
    [...]

    Interface
    [...]

    Types
    User An OBJECT representing a particular human user or agent thereof that interacts through a TERMINAL device. User objects have an associated environment which carries the vocabulary and preferences specific to that user.
    Gesture A unit OBJECT of input or output interaction. The TERMINAL device's characteristics determine the possible granularity of these objects.
    [...]
    Medium An OBJECT representing a particular interaction device, with its characteristics, behavior, and state. All objects of this TYPE are subject to a generic protocol which allows for abstraction within the limits of the device capabilities.
    Display An OBJECT representing an abstract rendering device.^3
    ^3 Is this redundant with a MEDIUM?
    [...]
    Morph An object whose context is a display. Objects of this type are those which offer the user some form of direct manipulation and do not inherently concern the presentation of some other object.
    World An object representing some site on a DISPLAY.
    Portal An object representing some channel between sites or a bus among sites on a display.

    [...]

    Migration
    The Migration aspect of a TUNES system includes all objects which describe the identity and relate to mechanisms for moving or duplicating objects between any kind of contexts.

    [...]

    Types
    Module A CONFIGURATION of OBJECTs with formal requirements for comprehension and formal provisions.
    Site A source or target of communication.
    Protocol A medium of communication. Precisely, a language/encoding.
    Stream [...]
    [...]
    Space ^4 A sharable OBJECT which provides a means of accessing publications
    ^4 Revise this term's name
    [...]
    Persistent Store
    [...]

    [...]

    Low-Level
    [...]

    Types
    Bit A single discrete unit of memory with two possible states.
    Word A vector of BITs defined as a unit of memory for some (possibly abstract) machine.
    [...]

    [...]

    Subsets

    Core/Bootstrap Language Semantics
    [...]

    Bibliography"
    [1] The TUNES Project [...]
    [2] The Common Lisp Hyperspec, derived from the ANSI Common Lisp standard (X3.226-1994) [...]"

    Comment
    The quoted document was written by the same author who also wrote the document titled "The Arrow System" quoted before.

    In the Arrow System migration was only about knowledge.

    Common Lisp (CL) is a multiparadigmatic programming language, which includes the paradigms procedural, functional, object-oriented, meta, reflective, and generic. Ooops. Obviously, we have a conceptual scheme based on the Binary-Relational Model (BRM), object-oriented triple store, etc..
    We are also talking about the object-oriented, reflective Distributed operating system (Dos) Aperion (Apertos (Muse)).
    And because our Evoos references the TUNES system, including the Arrow System, it also integrates the triple store concept and the other Dos in addition to the fields of Natural Multimodal Processing (NMP), Multiparadigmatic User Interface (MUI), Model-Based Autonomous System or Immobile Robotic System (ImRS or Immobot), Holonic Agent System (HAS), Multi-Agent System (MAS), Cognitive Agent System (CAS), Dynamic Symbol Stream (DSS), and our Resource-Oriented Computing (ROC), Autonomic Computing (AC), the foundations of os-level virtualization, mSOA, SDN, the fusion of realities, etc., etc., etc.
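    To make this concrete: a BRM-style triple store reduces every fact to a binary relation between a subject and an object. The following minimal sketch in Python is our own illustration under assumed names (TripleStore, is_a, and so on) and is not code from Evoos, TUNES, or any quoted work.

    # Minimal sketch of a Binary-Relational Model (BRM) style triple store;
    # all identifiers are illustrative assumptions.
    class TripleStore:
        def __init__(self):
            self.triples = set()  # each fact is a (subject, relation, object) triple

        def add(self, subject, relation, obj):
            self.triples.add((subject, relation, obj))

        def query(self, subject=None, relation=None, obj=None):
            # None acts as a wildcard, e.g. query(relation="is_a") yields all typing facts
            return [t for t in self.triples
                    if (subject is None or t[0] == subject)
                    and (relation is None or t[1] == relation)
                    and (obj is None or t[2] == obj)]

    store = TripleStore()
    store.add("socrates", "is_a", "human")       # the binary relation is_a(socrates, human)
    store.add("human", "subclass_of", "mortal")
    print(store.query(relation="is_a"))          # [('socrates', 'is_a', 'human')]

    Note that a triple (s, r, o) is exactly the binary-relational statement r(s, o), which is why the triple store and the BRM coincide.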

    At that time, the proposed TUNES system was not scalable, because the required solutions were lacking. As we explained in the past, we solved this problem with our polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS). See the

  • Ontologic Net Further steps of the 20th of February 2019,
  • Ontologic Net Further steps of the 23rd of February 2019,
  • OntoLix and OntoLinux Website update of the 10th of March 2019, and
  • OntoLix and OntoLinux Website update of the 12th of March 2019

    (keywords lookup and tuplespace or tuple space) for details and also the comment to the quoted document titled "Meeting the Computational Needs of Intelligent Environments: The Metaglue [Multi-Agent] System" in the Clarification of the 18th of February 2022.

    In the Website review of the 5th of March 2017 and the Clarification of the 6th of March 2017 we already published the result of our review of the TUNES project.
    A mailing list and an FTP archive may have existed since 1995, as the author suggests. But there was virtually nothing on the website of the TUNES project at the end of the year 1998, with the exception of links to said mailing list and FTP archive, which most probably were only mentioned to confuse the public about prior art, which again does not exist at all or not in the required quantity and quality.
    In fact, the webpage of the TUNES project was only filled with more and more content in the year 1999. What was published in 1999 followed exactly our research and development, and creation in relation to The Proposal describing our Evoos.
    {comparison will be done to prove} Between version 8, published on the 24th of April 1999, and what seems to be the last or final version, published in January 2000, only the title of the paper changed, by adding the term philosophy, and an index was added.
    The TUNES OS project in general, and the Arrow Logic and Arrow System in particular, match what we have described in The Proposal, but do not go beyond our Evoos, which is always an indicator of espionage or plagiarism.
    Obviously, the goal of the one or more responsible entities was to steal as much as possible without taking any risk, therefore no explicit MAS, CAS, os, etc., but only implicit ones, and potentially, after they got the information about the date of publication of the first version of The Proposal, they decided that nothing more was coming to steal and hence there was no need to wait any longer.
    The term philosophy and the lack of implementation show that the Arrow System is more an idea than a system architecture or a system implementation.
    The development of the TUNES OS abruptly stopped around 2001 and the author merely worked on a wiki for that project and the relatively rudimentary draft titled "The TUNES System Specification", published on the 8th of April 2003. The stopping of an activity that mirrors one of our activities at the moment we no longer provide anything to steal is another indicator of espionage or plagiarism.
    This draft makes the overall situation clear and tried to cure some of the deficits of the Arrow System in relation to real and physical, cybernetical and digital, and virtual and metaphysical (information) spaces, environments, worlds, and universes respectively realities, as well as in relation to an implementation by a reflective functional programming language, as not explained otherwise in the comment to the Arrow System (see the related quote above).

    Some time later, the (alleged) author of the quoted document titled "The Arrow System" began to work on the programming language Slate, which is based on the programming languages Self and Beta, the latter being referenced in the quoted paper. The implementation of Slate is done in the programming language Squeak, which was derived from the programming language Smalltalk-80 and is based on the graphics library Morphic of Self, which again is described as a graphical direct manipulation interface framework (see also the documents titled "Interactive Database Objects" and "Desktop Objects [...]" in relation to Oggetto again). Squeak is also the basis for Open Cobalt, which is a software for constructing, accessing, and sharing 3D Virtual Worlds, both on local area networks and across the Internet, with no need for centralized servers. The technology makes it easy to create collaborative and hyperlinked multi-user virtual workspaces, virtual exhibit spaces, and game-based learning and training environments, exactly what the author has missed to steal. See the Clarification of the 18th of February 2022, specifically the documents titled "Agent Chameleons: Agent Minds and Bodies", "Agent Chameleons: Virtual Agents [Powered By] Real Intelligence", and "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents", and also the other related works discussed therein.
    One of the other authors of the TUNES OS project is trying to sell technologies, goods (e.g. applications), and services based on the blockchain technology, specifically those which are based on our Distributed Ledger Technology (DLT) and Decentralized Web (DWeb), Decentralized Finance (DeFi), and Decentralized Commerce (D-Commerce), also wrongly called Web3©™, which was created with our OS and is included in our Ontoverse (Ov).

    Today, we do know that France Télécom, specifically its Centre National d'Études des Télécommunications (CNET)/DTL/ASR Research and Development (R&D) Department for Distributed Systems Architecture, was one of the fraudulent entities, as we have observed with other Telecommunication Service Providers (TSPs) in relation to the field of ABS (see once again the Clarification of the 18th of February 2022 and also the Clarification of the 13th of April 2022).
    Taken all together, this shows that our allegations are substantial and not just based on fantasy, personality disorder, paranoia, specifically persecution mania, or another individual psychological deficit.

    We quote a document about the fields of Multi-Modal Dialogue System (MMDS) respectively Multimodal User Interface (MUI) and Semantic (World Wide) Web (SWWW), which was published in January 2003: "More on Less: Further Applications of Ontologies in Multi-Modal Dialogue Systems
    [...]
    Historically, the ways in which knowledge has been represented in dialogue systems show that individual representations with different semantics and heterogeneously structured content can be found in various formats within single systems and applications. The diversity and heterogeneity of knowledge representation in earlier systems originates in the fact that each knowledge store is handcrafted individually for each task. Additionally, we find a multitude of formats and inference engines, which often cause both performance and tractability problems.
    We present how ontologically modelled knowledge is employed in the SmartKom system, based on the work introduced in [...] (2003c [Less is more: Using a single knowledge representation in dialogue systems]). In this paper, we present additional benefits of employing a single knowledge store through a multi-modal dialogue system (MMDS) and extensions of the earlier work. [...]

    The Representational Formalism Used
    The formalisms pertinent to the following description of the ontology originate in various W3C and Semantic Web projects. These brought about knowledge modeling standards, such as the Resource Description Framework (RDF), the DARPA Agent Mark-up Language (DAML), the Ontology Interchange Language (OIL) and the Web Ontology Language (OWL).[...] This allows to represent domain and discourse knowledge in ontologies using XML-based semantic mark-up languages, such as OIL, or DAML+OIL. In the work reported here, we used an ontology defined in the OIL-RDFS syntax. OIL-RDFS is a representation format which allows to express any OIL ontology in RDF syntax. This has the advantage that the ontology is partially understandable for non-OIL aware RDFS applications. Additionally it allows for all the formal semantics and reasoning support available for OIL.
    [...] (2001 [OIL And Ontology Infrastructure for the Semantic Web]) provide a detailed characterization of the formal properties of the OIL language. The [Fast Classification of Terminologies (]F[a]CT[) ...] system can be used as a reasoning engine for OIL ontologies, providing some automated reasoning capabilities, such as class consistency or subsumption checking. Graphical ontology engineering front-ends and visualization tools are available for editing, maintaining, and visualizing the ontology.
    The OIL semantics is based on a combination of frame- and description logic extended with concrete datatypes. It provides most of the modeling primitives commonly used in the frame-based knowledge representation systems, e.g. frames are used to represent concepts. [...]
    [...]
    Schemes based on this combination of frame- and description logic allow to represent enough knowledge for the effective operation of NLP applications described in Section [Applications in SmartKom]. [...]

    The SmartKom Ontology
    As one of the most advanced current systems, the SmartKom system ([...] 2001 [SmartKom: Multimodal communication with a life-like character]) comprises a large set of input and output modalities together with an efficient fusion and fission pipeline. SmartKom features speech input with prosodic analysis, gesture input via infrared camera, recognition of facial expressions and their emotional states. On the output side, the system features a gesturing and speaking life-like character together with displayed generated text and multimedia graphical output. It currently comprises nearly 50 modules running on a parallel virtual machine-based integration software called [MUltiple Language / Target Integration PLATform FOR Modules] Multiplatform described in [...] (2003 [Multiplatform testbed: an integration platform for multimodal dialog systems]).
    Complex MMDS such as SmartKom require a homogeneous world model, that serves as a common knowledge representation for various modules throughout the system. It represents and brings together a general conceptualization of the world (top-level or generic ontology) as well as of particular domains (domain-specific ontologies). This way, the ontology represents language-independent knowledge. Language-specific knowledge is stored elsewhere, e.g. in the lexicon containing lexical items together with their meanings defined in terms of ontology concepts.
    [...] This existing ontology was adopted in the SmartKom project and modified to cover a number of new domains, e. g., new media and program guides, personal assistance system and standard applications. The top-level ontology was re-used with some slight extensions. Further developments were motivated by the need of a process hierarchy. This hierarchy models processes which are domain-independent in the sense that they can be relevant for many domains.
    [...] The purpose of the top-level ontology is to provide a basic structure of the world, i. e., abstract classes to divide the universe in distinct parts as resulting from the ontological analysis (1995 [Formal ontology in conceptual analysis and knowledge representation]). [...] Ontology construction on this level is rather a matter of constant negotiation, which distinctions to make. [...] Once available, the ontology was augmented with comments containing definitions, assumptions and examples that facilitate its appropriate use in a multi-component system such as SmartKom and its possible re-use in other systems. Such descriptions of ontology classes are particularly important as the meanings associated with them may vary considerably from one ontology to another.

    Applications in SmartKom
    Natural language understanding
    A template based semantic parser ([...] 2002) is used for the task of natural language understanding. Similar to production systems templates modify a working memory (WM) which is initially filled with the input words delivered by the speech recognizer. Then the templates transform the initial words step-by-step first to simple instances of the ontology and afterwards combine these instances. Each template consists of a condition and an action part. The condition part checks the presence of certain words or instances of certain classes in the WM. The action part creates one or more new instances that may contain slots filled with instances matched by a condition.
    [...] To avoid the production of output which is inconsistent with respect to the ontology, automatic syntactic and semantic checking should be available.
    [...]
    To prevent the generation of invalid output the templates are checked against the ontology while they are loaded. In this way, instances of classes not defined in the ontology (e.g., caused by typos in the templates) cannot even be constructed. [...]

    Multimodal Fusion
    The task of a multimodal fusion component within the SmartKom system is to combine or integrate the hypotheses produced by the analyzers of the different modalities. For example, a speech recognition hypothesis containing a deictic expression and a simultaneously performed pointing gesture (dereferencing an object displayed on the screen) are fused into a single hypothesis by replacing the deictic expression with the indicated object.
    [...] deictic expressions are omitted due to either recognition errors or vague or reduced utterances. There the ontology can be helpful to combine them.
    [...] We apply the ontology in order to be able to search for the appropriate position for an analyzed gesture within a speech recognition hypothesis containing no referential expressions. [...]

    Semantic Coherence Scoring
    [...] introduce the notion of semantic coherence as a measurement for scoring sets of concepts with respect to the existing knowledge representation. They show how it can be applied to the task of classifying automatic speech recognition hypotheses (SRH) as coherent and incoherent. [...] The applications thereof provide a mechanism that increases the robustness and reliability of multi-modal dialogue systems.
    [...] Facing multiple representations of a single utterance poses the question, which of the different hypotheses corresponds most likely to the user's utterance. Additionally if a hypothesis has been chosen, a choice has to be made which of the possible interpretations is the best one.
    [...]
    The software for scoring the sets of concepts and classifying them in terms of their semantic coherence employs the ontology described herein. This means, that the ontology crafted as a general knowledge representation for various processing modules of the system is additionally used as the basis for evaluating the semantic coherence of sets of concepts.
    [...]

    Computing Dialogue Coherence
    By viewing the instances of the ontology as typed feature structures we can use unification and unification-like operations for the enrichment and validation of user hypotheses in the discourse module [...].
    [...]
    Ontologies allow for closed world reasoning based on the types in the inheritance hierarchy. In contrast to non-monotonic operations like default unification, [...] which assume an open world we use the types in the hierarchy in a fashion similar to priority union [...]. [...] their default unification allows for a natural and convenient way for interpreting elliptical phenomena. Important here is that this approach together with a proper domain model - our ontology - makes it possible to inherit discourse information by combining new information with old one in a straightforward way. Another advantage is that we have a well defined operation with a well defined semantics ([...] 2003 [The Formal Foundations Underlying Overlay]).
    [...]

    Dialogue Management
    The dialogue manager constructs and executes plans of communicative actions to accomplish the user's goals in the dialogue system.
    In the SmartKom system, eleven different applications are integrated, each providing a set of services. These services require cross-application cooperation as well as mixed-initiative subdialogues with the user. The representation of the system domain provided by the ontology was to be used to model these interactions uniformly and consistently. [...]
    The applications were defined in a plan language that models the actions necessary to carry out processes provided by the ontology corresponding to the services offered by the system. This is done in terms of dialogue games, where the dialogue engine communicates with the user, application modules, or other parts of the dialogue system.
    Some applications provide functionality for the user, some implement services for other applications. In terms of the ontology, the applications together with the dialogue system are seen as being able to realise a set of processes, requiring or making available the corresponding roles. [...]
    The operations on processes and roles exchanged between the dialogue manager and its dialogue game partners are encoded in the ontology-derived XML schemata [...].
    Processes and roles that are uniform across communication channels facilitate operations that involve integrating system output, user input and intra-system communication. [...]

    Generating Interface Specifications
    In this additional application, we proposed to use the knowledge modeled in the ontology as the basis for defining the semantics and the content of information exchanged between various modules of the system ([...] 2003b [Automatic creation of interface specifications from ontologies]).
    In NLP systems, modules typically exchange messages [...] The increasing employment of XML-based interfaces for agent-based or other multi-blackboard communication systems sets a de facto standard for syntax and expressive capabilities of the information that is exchanged amongst modules. [...]
    As discussed above, ontologies are a suitable means for knowledge representation, e.g. for the definition of an explicit and detailed model of a system's domains. That way, they provide a shared domain theory, which can be used for communication. Additionally, they can be employed for deductive reasoning and manipulations of models. The meaning of ontology constructs relies on a translation to some logic. [...]
    [...] Ideally, the definition of the content communicated between the components of a complex dialogue system should relate both the syntax and the semantics of the XML documents exchanged. Those can then be seen as instances of the ontology represented as XMLS-based XML documents. [...]
    The solution proposed states that the knowledge representations to be expressed in XMLS are first modeled in OIL-RDFS or DAML+OIL as ontology proper, using the advantages of ontology engineering systems available, and then transformed into a communication interface automatically with the help of the software developed for that purpose.
    [...]
    The resulting schemata capture the hierarchical structure and a significant part of the semantics of the ontology. We, therefore, provide a standard mechanism for defining XMLS-based interface specifications, which are knowledge rich, and thus can be used as a suitable representation of domain and discourse knowledge by NLP components. Since the software that has been developed completely automates the transformation process, the resulting XMLS are congruent with the XML schema specifications. Furthermore, the ontology can be re-used in multiple systems as a single ontology can be used to generate application-specific communication interfaces.
    However, the main advantage of our approach is that it combines the power of ontological knowledge representation with the strengths of XMLS as an interface specification framework in a single and consistent representation. Our experience shows, this would not have been possible for a complex dialogue system, if XML schemata were defined from scratch or hand-crafted, and constitutes a step towards building robust and reusable NLP components.

    Concluding Remarks
    In this paper, we presented further applications of an ontology which is used as a single knowledge representation in a multi-modal and multi-domain dialogue system, namely natural language understanding, multimodal fusion, semantic and dialogue coherence scoring, dialogue management and interface specification. Additional applications, excluded in this description, can be found in the system's output pipeline, e.g. the dynamic help system and natural language generation.
    [...] these examples suffice to strengthen the claims made in [...] (2003c [Less is more: Using a single knowledge representation in dialogue systems]) substantially. Firstly, this concerns the benefits of using a single knowledge representation throughout a dialogue system as opposed to using multiple knowledge representations and formats. Secondly it concerns the additional advantages of such a homogeneous world model that defines the processing interfaces as well as the system's world knowledge, as costly mappings between them are no more necessary. This means that modules receive only messages whose content is congruent to the terminological and structural distinctions defined in the ontology.
    [...]"

    Comment
    Authors: European Media Lab, and hence SAP and the joint venture DFKI.
    The foundation is already given in the document titled "Less is More: Using a Single Knowledge Representation in Dialogue Systems", which is cited in the quoted work; the latter document includes the content of the former virtually 1:1, but is more complete, specifically in relation to Natural Language Processing (NLP).

    We note that Evoos is reflective and self-referential, internally interactive as a Holonic Agent System (HAS) and Multi-Agent System (MAS), and has an ontology, which implies an adaptive or dynamic ontology.
    Our Evoos is also externally interactive as MAS and os, and has a Multi-Modal Dialogue System (MMDS) respectively Multimodal User Interface (MUI), which implies the foundation of the projects SmartKom, SmartWeb, and much more.
    In fact, the SmartKom MMDS

  • is the follow-up to the Verbmobil Machine Translation (MT) system, which was researched and developed in the years 1993 to 2000 and had no relation to ontology,

    and initially had

  • no interface agent kernel, and also
  • no ontology and no relation to SWWW at first, which seem to have been added in 2002 and eventually were presented with "Less is More [...]" ("In this paper we introduce the results of an effort to employ a single knowledge representation, i. e., an ontology, throughout a complete multimodal dialogue system") and "Multiplatform testbed [...]" in 2003.

    SmartWeb is based on SmartKom.
    Therefore, there was never a need to reference them as prior art in relation to our OS, but there was always a need to reference our Evoos in relation to those projects and systems.

    MUI -> also jump on bandwagon of ontology
    if one sees something, then there is no translation from Computer Vision (CV) to Natural Language Processing (NLP) as an intermediate step in the brain, but subsymbolic computation or processing
    The Semantic Web stack includes all data interchange: KG RDF, taxonomies RDFS, ontologies OWL, rules, at first RIF/SWRL and later also RuleML, unifying logic, etc., but not in one (RDF) graph or overall system.
    By the way: The development of RuleML followed or even is based on Evoos. The further development of RuleML followed or even is based on our OS.
    Both MUI and SWWW, and their integration → NMP and not only NLP, 3D, VE, VR, emotion, cognition, immobot, etc., and a web of them, but the integration of them into our Web 3.0 and NR respectively Ontoverse (Ov)
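    To illustrate the point about one (RDF) graph: taxonomy (RDFS-style subClassOf triples) and instance data (RDF-style type triples) can be held in a single triple set and closed under one inference rule. The following sketch is our own hedged illustration with invented identifiers, not code from any quoted work or W3C standard.

    # One graph holding both instance data and taxonomy; identifiers are invented.
    graph = {
        ("rex", "type", "Dog"),                   # RDF-level instance triple
        ("Dog", "subClassOf", "Animal"),          # RDFS-level taxonomy triple
        ("Animal", "subClassOf", "LivingThing"),
    }

    def infer_types(graph):
        # rule: (x type C) and (C subClassOf D) => (x type D), iterated to a fixpoint
        changed = True
        while changed:
            changed = False
            for (x, p, c) in list(graph):
                if p != "type":
                    continue
                for (c2, p2, d) in list(graph):
                    if p2 == "subClassOf" and c2 == c and (x, "type", d) not in graph:
                        graph.add((x, "type", d))
                        changed = True
        return graph

    infer_types(graph)
    print(sorted(t for t in graph if t[1] == "type"))  # rex typed as Dog, Animal, LivingThing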

    Furthermore, "[s]ince agents operate within frames, or structures of contexts, it follows that there may exist multiple ontologies for a given domain within the current context, and that these ontologies may easily crosscut each other" the Arrow System of the TUNES OS defines an ontological frame and an ontological relativism, which both are referenced on the webpage Links to Software, and also related to the Peer-to-Peer (P2P) Virtual Machine (VM) (P2P VM) Askemos, which we showed to be based on Evoos as well (see the related Clarifications {links missing}) and is better known from the smart contract transaction protocol.
    We also have slots in SmartKom for actions in relation to natural language understanding respectively Natural Language Processing (NLP).
    But note that the prior art, including PCL, BRM, Oggetto, Arrow System, reflective Dos TUNES OS and Aperion (Apertos (Muse)), and so on, lacks many features of our Evoos and OS.
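    For readers who want to see the quoted template mechanism in miniature: condition/action templates rewrite a working memory step by step, first turning words into instances and then combining instances. The following Python sketch is our own hedged illustration; the templates, slots, and instances are invented and are not SmartKom's actual data or code.

    # Hedged sketch of condition/action templates over a working memory (WM).
    def apply_templates(working_memory, templates):
        changed = True
        while changed:
            changed = False
            for condition, action in templates:
                # condition part: all required items must be present in the WM
                if all(item in working_memory for item in condition):
                    instance = action()
                    # action part: create a new instance, unless it exists already
                    if instance not in working_memory:
                        working_memory.append(instance)
                        changed = True
        return working_memory

    templates = [
        # words -> simple instance, then instances -> combined instance
        (["show", "map"], lambda: ("MapRequest", ("object", "map"))),
        ([("MapRequest", ("object", "map"))], lambda: ("DisplayAction", ("target", "map"))),
    ]

    wm = ["please", "show", "the", "map"]
    print(apply_templates(wm, templates))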

    As we explained since the start of our OS, we have thought through this idea, concept, approach, and system, and all the other ideas, concepts, approaches, and systems discussed in this clarification and elsewhere, to the maximum in 1999 to 2000, which required more cybernetical foundations, such as the PolyContextural Logics (PCL), the fusion of linguistics, NLP, and ACL, messaging, etc., but also a graph-based approach, and which allowed us to eliminate whole frameworks, such as XML, at runtime and to use them only when required, as discussed in this clarification and elsewhere.
    The result is much more than

  • semantic knowledge representation, including
    • list-based Knowledge Base (KB),
    • table-based Knowledge Base (KB), and
    • graph-based Knowledge Base (KB) or Knowledge Graph (KG),

    and

  • Semantic (World Wide) Web (SWWW) with its system stack, including
    • KG RDF,
    • taxonomies RDFS,
    • ontologies OWL,
    • rules, at first RIF/SWRL and later also RuleML,
    • unifying logic,
    • etc.,

    which therefore is called onto instead of poly, ontogonal instead of polygonal, and Ontologics instead of polylogics.

    The Multiplatform is "a [distributed] multi-blackboard platform with ontology-based messaging" "based on [the Parallel Virtual Machine (]PVM[)]" with "publish/subscribe messaging on top of PVM", but it is a middleware, which repeated the same strategy, as seen with the Multi-Agent System (MAS), to circumvent our Evoos.
    But we always explain that it is bad system design and architecture due to the additional but obsolete layers, which only increase complexity and runtime, because, for example, costly mappings and alignments between the layers and additional controls of them become necessary (see also the comment to the quote of the document titled "Co-ordination in software agent systems" in the Clarifications of the 13th of April 2022 and the comment to the quote of the document titled "COGs: Cognitive Architecture for Social Agents" in the Clarification of the 18th of February 2022).
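    The kind of publish/subscribe messaging over a blackboard that such a middleware provides can be sketched in a few lines. The following is our own hedged illustration with invented module and topic names; it is not the actual Multiplatform or PVM Application Programming Interface (API).

    # Hedged sketch of publish/subscribe messaging over a blackboard.
    from collections import defaultdict

    class Blackboard:
        def __init__(self):
            self.subscribers = defaultdict(list)  # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            # every module subscribed to a topic sees the message; modules never
            # address each other directly, which is the decoupling a blackboard buys
            for callback in self.subscribers[topic]:
                callback(message)

    bb = Blackboard()
    bb.subscribe("speech.hypothesis", lambda m: print("fusion module got:", m))
    bb.subscribe("speech.hypothesis", lambda m: print("discourse module got:", m))
    bb.publish("speech.hypothesis", {"words": ["show", "me", "the", "map"]})

    Every further layer stacked on top of such a middleware adds one more mapping between message formats, which is exactly the cost argued above.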

    We do not want to talk down their workings-out of our concepts, but they spied on us all the time, never communicated with us, and eventually they never caught up or got into the lead.
    At this point, C.S. mused about the reason why The Proposal was not already accepted by Professor W. Banzhaf as a scientific work and as a diploma thesis at the faculty of the university in the year 2000, even in its relatively short, because minimalistic, highly concentrated, and focused expression of idea and its unusual form. But C.S. let the matter rest again and instead began to exploit the copyright.
    Potentially, a discussion about the copyright was the reason for sharing confidential information in the department of the university and elsewhere.
    The same holds for all other entities, including the European Commission (EC) with its multiple research projects related to our Evoos at that time, around the years 2000 to 2006, and in the following years until today.
    Finally, this grotesque parody and worldwide social scandal is over, definitely, and our Society for Ontological Performance and Reproduction (SOPR) is in place.

    We quote a document, which is about the SmartWeb project and our Ontoscope (Os), and was published on the 6th of January 2007: "SmartWeb Handheld: Multimodal Interaction with Ontological Knowledge Bases and Semantic Web Services
    [...]

    Introduction
    The development of a context-aware, multimodal mobile interface to the Semantic Web [[... Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential.] 2003], i.e., ontologies and web services, is a very interesting task since it combines many state-of-the-art technologies such as ontology development, distributed dialog systems, standardized interface descriptions (EMMA^1, SSML^2, RDF^3, OWL-S^4, WSDL^5, SOAP^6, MPEG7^7), and composition of web services. In this contribution we describe the intermediate steps in the dialog system development process for the project SMARTWEB [[... SmartWeb: Mobile Applications of the Semantic Web.] 2004], which was started in 2004 by partners from industry and academia.
    In our main scenario, the user carries a smartphone PDA and poses closed and open domain multimodal questions [...]. [...] the user should be able to use the PDA as a question answering (QA) system, using speech and gestures to ask for information [...] stored in ontologies, or other up-to-date information like weather forecast information accessible through web services, Semantic Web pages ([i.e.] Web pages wrapped by semantic agents), or the Internet.
    The partners of the SMARTWEB project share experience from earlier dialog system projects [[... VERBMOBIL: Foundations of Speech-to-Speech Translation.] 2000; [SmartKom: Symmetric Multimodality in an Adaptive and Reusable Dialogue Shell.] 2003; [... MIAMM - A Multimodal Dialogue System Using Haptics.] 2005b]. [...] our first demonstrator system [[...] 2005a] which contains the following assets [(list points added for better understanding)]:

  • multimodality, more modalities allow for more natural communication,
  • encapsulation, we encapsulate the multimodal dialog interface proper from the application,
  • standards, adopting to standards opens the door to scalability, since we can re-use ours as well as other's resources, and
  • representation.

    A shared representation and a common ontological knowledge base ease the data flow among components and avoid costly transformation processes. In addition, semantic structures are our basis for representing dialog phenomena such as multimodal references and user queries. The same ontological query structures are input to the knowledge retrieval and web service composition process.
    In the following we demonstrate the strength of Semantic Web technology for information gathering dialog systems, especially the integration of multiple dialog components, and show how knowledge retrieval from ontologies and web services can be combined with advanced dialogical interaction, i.e., system-initiative callbacks, which present a strong advancement to traditional QA systems. Traditional QA realizes, like a traditional NLP dialog system, a (recognize) - analyze - react - generate - (synthesize) pipeline [[... An Architecture for a Generic Dialogue Shell.] 2000]. [...] The types of dialogical phenomena we address and support include reference resolution, system-initiated clarification requests and pointing gesture interpretation among others. Support for underspecified questions and enumeration question types additionally shows advanced QA functionality in a multimodal setting. One of the main contributions is the ontology-based integration of verbal and non-verbal system input (fusion) and output (system reaction).

    Multimodal interaction sequence example
    [...]
    The first and second enumeration questions are answered by deductive reasoning within the ontological knowledge base modeled in OWL [[... How to reason with OWL in a logic programming system.] 2006] representing the static but very rich implicit knowledge that can be retrieved. The second example [...] evokes a dynamically composed web service lookup. It is important to note that the query representation is the same for all the access methods to the Semantic Web [...] and is defined by foundational and domain-specific ontologies. In case the GPS coordinates were accessible from the mobile device, the clarification question would have been omitted.

    Architecture approach
    A flexible dialog system platform is required in order to allow for true multi-session operation with multiple concurrent users of the server-side system as well as to support audio transfer and other data connections between the mobile device and a remote dialog server. These types of systems have been developed, like the Galaxy Communicator [[... The Open Agent Architecture.] 2001] (cf. also [[... Organization, Communication, and Control in the Galaxy-II Conversational System.] 1999; [... Artificial intelligence in computer graphics: A constructionist approach.] 2004; [... Large-scale Software Integration for Spoken Language and Multimodal Dialog Systems.] 2004; [...]]), and commercial platforms from major vendors [...]. For our purposes these platforms are too limited. To implement new interaction metaphors and to use Semantic Web based data structures for both dialog system internal and external communication, we developed a platform designed for Semantic Web data structures for NLP components and backend knowledge server communication.
    [...]
    The dialog system instantiates and sends the requests to the Semantic Mediator, which provides the umbrella for all different access methods to the Semantic Web we use. It consists of an open domain QA system, a Semantic Web service composer, Semantic Web pages (wrapped by semantic agents), and a knowledge server.
    The dialog system consists of different, self-contained processing components. To integrate them we developed a Java-based hub-and-spoke architecture [[... An integration framework for a mobile multimodal dialogue system accessing the semantic web.] 2005]. The most important processing modules in the dialog system connected in the IHUB are: a speech interpretation component [...], a modality fusion and discourse component [...], a system reaction and presentation component [...], and a natural language generation module [...]. An [Extensible MultiModal Annotation markup language (]EMMA[)] Unpacker/Packer (EUP) component provides the communication with the dialogue server and Semantic Web subsystem external to the multimodal dialog manager and communicates with the other modules of the dialog server, the multimodal recognizer, and the speech synthesis system.
    [...]

    Ontology representation and web services
    [...]
    The ontological infrastructure of SMARTWEB, the SWIntO (SMARTWEB Integrated Ontology), is based on an upper model ontology realized by merging well chosen concepts from two established foundational ontologies, DOLCE [[...] 2002] and SUMO [[...] 2001], in a unique one: the SMARTWEB foundational ontology SMARTSUMO [[...] 2004]. Domain specific knowledge [...] is defined in dedicated ontologies modeled as sub-ontologies of the SMARTSUMO. The SWIntO integrates question answering specific knowledge of a discourse ontology (DISCONTO) and representation of multimodal information of a media ontology (SMARTMEDIA). The data exchange is RDF-based.
    [...]
    Information exchange between the components of the server-side dialog system is based on the W3C EMMA standard that is used to realize containers for the ontological instances representing, e.g., multimodal input interpretations. SWEMMA is our extension to the EMMA standard which introduces additional Result structures in order to represent components output. On the ontological level we modeled an RDF/S-representation of EMMA/SWEMMA.
    [...]

    Multimodal access to web services
    To connect to web services we developed a semantic representation formalism based on OWL-S and a service composition component able to interpret an ontological user query. We extended the OWL-S ontologies to flexibly compose and invoke web services on the fly, gaining sophisticated representation of information gathering services fundamental to SMARTWEB.
    [...] The composition engine follows a plan-based approach as explained, e.g., in [[... Automated planning.] 2004]. It infers the initial and goal state from the semantic representation of the user query, whereas the set of semantic web services is considered as planning operators. [...]
    [...] Text-based event details, additional image material, and the location map are semantically represented (the map in MPEG7) and returned to the dialog engine.

    Semantic parsing and discourse processing
    Semantic parsing and other discourse processing steps are reflected on the interaction device as advanced user perceptual feedback functionality. [...].

    Language understanding [...] and text generation [...]
    [...]

    Multimodal discourse processing [...]
    An important aspect of SMARTWEB is its context-aware processing strategy. All recognized user actions are processed with respect to their situational and discourse context. A user is thus not required to pose separate and unconnected questions. [...] The interpretation of user contributions with respect to their discourse context is performed by a component [...] [[... an integrated approach to multimodal fusion and discourse processing] 2005]. The task [...] is to integrate the verbal and nonverbal user contributions into a coherent multimodal representation to be enriched by contextual information, e.g., resolution of referring and elliptical expressions.
    The basic architecture [...] consists of two interweaved processing layers: (1) a production rule system [...] that is responsible for the reactive interpretation of perceived monomodal events, and (2) a discourse modeler [...] that is responsible for maintaining a coherent representation of the ongoing discourse and for the resolution of referring and elliptical expressions.
    [...]

    Reaction and presentation planning for the Semantic Web
    [...] is based on a finite-state-automaton and information space (IS). Our new approach differs from other IS approaches (e.g. [[... Modelling grounding and discourse obligations using update rules.] 2000]) by generating IS features from the ontological instances generated during dialog processing [[... Towards combining finite-state, ontologies, and data driven approaches to dialogue management for multimodal question answering. 8th to 10th of October] 2006]^11 [Bingo!!!]
    ^11 The IS state is traditionally divided into global and local variables which make up the knowledge state at a given time point. Ontological structures that change over time vastly enhance the representation capabilities of dialog management structures, or other structures like queries from which relevant features can also be extracted. [Bingo!!!]
    Since the dialog ontology is a model for multimodal interaction, multimodal MPEG7 result representations, multi-modal result presentations, dialog state, and (agent) communication with the backend knowledge servers, large information spaces can be extracted from the ontological instances describing the system and user turns in terms of special dialog acts - to ensure accurate dialog management capabilities. [Bingo!!! ...] The IS approach to dialog modeling comprises, apart from dialog moves and update strategies, a description of informational components (e.g. common ground) and their formal representations. Since [...] the formal dialog specification consists of ontological structures as Semantic Web data structures, a formal well-defined complement to previous formal logic-based operators and Discourse Representation Structures (DRS) is provided. However, the ontological structures resemble typed feature structures (TFS) [[...] 1992] [...]. [...]
    It is important to mention that dialog reaction behaviour within SMARTWEB is governed by the general QA scenario, which means that almost all dialog and system moves relate to questions, follow-up questions, clarifications, or answers. As these dialog moves can be regarded as adjacency pairs, the dialog behaves according to some finite state grammar for QA, which makes up the automaton part (FSA) [...]. The finite state approach enhances robustness and portability and allows to demonstrate dialog management capabilities even before more complex IS states are available to be integrated into the reaction and presentation decision process. [...]

    Dialog component integration
    In this section we will focus on issues of interest pertaining to the system integration. In the first instance dialog component integration is an integration on a conceptual level. All dialog manager components communicate via ontology instances. This assumes the representation of all relevant concepts in the foundational and domain ontologies - which is hard to provide at the beginning of the integration. In our experience, using ontologies in information gathering dialog systems for knowledge retrieval from ontologies and web services in combination with advanced dialogical interaction is an iterative ontology engineering process, which requires very disciplined ontology updates, since changes and extensions must be incorporated into all relevant components. [...]
    We first built up an initial discourse ontology [...]. In addition, an ontological dialog act taxonomy has been specified [...]. [...] the mapping between semantic queries and the ontology instances in the knowledge base. In our system, the discourse (understanding) specific concepts have been linked up with the foundational ontology and, e.g., the [domain] ontology, and the semantic parser only builds up interpretations with SWIntO[ntology] concepts. [...]
    [...] ontological representations offer a framework for gesture and speech fusion when users interact with Semantic Web results such as MPEG7-annotated images and maps. Challenges in multimodal fusion and reaction planning can be addressed by using more structured representations of the displayed content, especially for pointing gestures [...]. We extended this to pointing gesture representations on multiple levels in the course of development, to include representations of the interaction context, the modalities and display patterns used, and so on. [Bingo!!!]
    The primary aim is to generate structured input spaces for more context-relevant reaction planning to ensure naturalness in system-user interactions to a large degree. [...] The challenge of integrating and fusing multiple input modalities can be reduced by ontological representations, which exist at well-defined timepoints, and are also accessible to other components such as the semantic parser, or the reaction and presentation module.

    Conclusion
    We presented a mobile system for multimodal interaction with an ontological knowledge base and web services in a dialog-based QA scenario. The interface and content representations are based on W3C standards such as EMMA and RDF. The world knowledge shared in all knowledge-intensive components is based on the existing ontologies SUMO and DOLCE, for which we added additional concepts for QA and multimodal interaction in a discourse ontology branch.
    We presented the development of the second demonstrator of the SMARTWEB system which was successfully demonstrated in the [summer] of [...] 2006 [...]. [...]
    [...] Support for inferential, i.e., deductive reasoning, complements the requirements for advanced QA in terms of information- and knowledge retrieval. Integrated approaches as presented here rely on ontological structures and deeper understanding of questions, not at least to provide a foundation for result provenance explanation and justification. Our future plans on the final six month agenda include dialog management adaptations via machine learning and collaborative filtering of redundant results in our multi-user environment [...]."

    Comment
    Everywhere integration and gap bridges, and 27 times the term ontological. Guess why, and just remove the terms related to the Semantic Web and then compare the resulting basic properties and functionalities with the field of Humanistic Computing (HC or HumanC), the Arrow System (AS), and our Evolutionary operating system (Evoos). Ooops, it's all already included in our Evoos, and because our Evoos is a cybernetic self-portrait, self-augmentation, and self-extension, we have what? Indeed, a legal issue with all related projects, works, systems, and so on.
    We do not know what was shown at the event in the summer of 2006.
    The work was published on the 6th of January 2007 at the International Conference on Artifical intelligence for Human Computing 2007 (ICMI'06/IJCAI'07).
    There is a little confusion about its date of publication, because the properties of the file itself show that the document was created and last edited on the 6th of November 2006, which was just a little less than 2 weeks after the first publication of our OS by the upload of the content of the website of OntoLinux and 3 days before the official start of our OS on the 9th of November 2006.
    Furthermore, the International Workshop on Artificial Intelligence for Human Computing at the IJCAI was a joint event held together with the 8th International Conference on Multimodal Interfaces (ICMI), which was held on the 3rd of November 2006, and hence it was not clear whether the quoted work was already presented at the ICMI'06 and not only at the IJCAI'07. But a look at the preface of the document titled "ICMI'06/IJCAI'07: Proceedings of the ICMI 2006 and IJCAI 2007 international conference on Artifical intelligence for human computing" clarifies the situation: "Preface
    This volume in the Lecture Notes of Artificial Intelligence represents the first book on human computing. We introduced the notion of human computing in 2006 and organized two events that were meant to explain this notion and the research conducted worldwide in the context of this notion.
    The first of these events was a Special Session on Human Computing that took place during the Eighth International ACM Conference on Multimodal Interfaces (ICMI 2006), held in Banff, Canada, on November 3, 2006. The theme of the conference was multimodal collaboration and our Special Session on Human Computing was a natural extension of the discussion on this theme. We are grateful to the organizers of ICMI 2006 for supporting our efforts to organize this Special Session during the conference.
    The second event in question was a Workshop on AI for Human Computing organized in conjunction with the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), held in Hyderabad (India), on January 6, 2007. The main theme of IJCAI 2007 was AI and its benefits to society. Our workshop presented a vision of the future of computing technology in which AI, in particular machine learning and agent technology, plays an essential role. We want to thank the organizers of IJCAI 2007 for their support in the organization of the Workshop on AI for Human Computing."
    Furthermore, one reference was presented at a conference held on the 9th to 10th of October 2006, and due to its complexity we do not think that it was integrated in only 2 weeks until the end of October 2006, when we started our OS unofficially.

    No longer surprising at that time was the odd timing of the developments and publications in this field, which matches our activities of research and development, and also our creations and publications.
    Also note that our Evoos also has a Multimodal User Interface (MUI), as can easily be seen with the assignment of the chapters of The Proposal (see also the related discussion in the Clarification of the 18th of February 2022).

    EMMA: Extensible MultiModal Annotation markup language [] W3C Working Draft 11 August 2003

    The Semantic (World Wide) Web (SWWW) is based on, for example, RDF and RuleML, which are based on the triple store, which again is based on the Binary Relation Model (BRM).
    Our Evoos has an ontology, the BRM, (classic and non-classic) Logic Programming (LP), a common Ontologic Knowledge Base (OKB), including an ontology-based or ontological KB (oKB), a Multimodal User Interface (MUI), etc.

    Furthermore, we have referenced the Galaxy Communicator in the section Natural Language Processing of the webpage Links to Software. Before that, SmartKom used the Multi-BlackBoard (MBB) system called Multiplatform, because it was the ultimate state-of-the-art. See the document titled "More on Less: Further Applications of Ontologies in Multi-Modal Dialogue Systems" quoted before.
    We had to laugh out loud when we saw that bandwagon jumping of them another time due to our Evolutionary operating system (Evoos) Architecture (EosA) and Ontologic System Architecture (OSA), specifically because we already integrated the CHemical Abstract Machine (CHAM) for integrating a BlackBoard (BB) system, which is an Associative Memory (AM) or Associatively-Addressable Memory (AAM), as well. We will have the next loud laugh when they jump back due to our Space-Based technologies (SBx).
    Also note that we always said that we integrated the Galaxy Communicator in accordance with our EosA and OSA, which has been copied here without referencing our Evoos and OS.

    In fact, we wanted something like Java Jini, which is based on a

  • Tuple Space (TS) system, which again is a BlackBoard (BB) system, which again is an Associative Memory (AM) or Associatively-Addressable Memory (AAM), and
  • Multi-BlackBoard (MBB) system,

    because our OS integrates and even fuses a Tuple Space (TS), a Binary Relation Model (BRM) (see the TUNES OS and Arrow System once again), a triple store, as also used with RDF of the Semantic (World Wide) Web (SWWW) (see also the Ontologic File System (OntoFS) component, and note that the triple store is also used for AI), and the various triplespaces for the 3D GUI and our semantic middleware.
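    A minimal, hedged sketch of such a Tuple Space with the classic out/rd/in primitives follows; it shows the associative, content-addressable matching meant above, but it is not the actual API of Jini, JavaSpaces, or Linda, and all tuple contents are invented.

    # Hedged sketch of a Linda-style Tuple Space; None fields act as wildcards.
    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, tup):
            self.tuples.append(tup)          # write a tuple into the space

        def _match(self, pattern):
            for tup in self.tuples:
                if len(tup) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, tup)
                ):
                    return tup
            return None

        def rd(self, pattern):
            return self._match(pattern)      # read without removing

        def take(self, pattern):             # Linda's "in", renamed (in is a Python keyword)
            tup = self._match(pattern)
            if tup is not None:
                self.tuples.remove(tup)
            return tup

    ts = TupleSpace()
    ts.out(("temperature", "berlin", 21))
    print(ts.rd(("temperature", None, None)))    # associative lookup by content
    print(ts.take(("temperature", "berlin", None)))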
    Note once again that

  • simple sentence,
  • RDF (Subject, Predicate, Object) (Subject, Property, Object) triple, and
  • Binary-Relational Model (BRM) respectively relationship (subject, relation, object) triple,

    and

  • main sentence,
  • OWL and other Markup Languages (MLs),
  • tuple, and
  • polygonal database model tuple

    are the same and

  • tuples can be handled as triples and
  • triples can be handled as respectively are 3-tuples,

    so that we get Natural Language Processing (NLP) and Natural Language Understanding (NLU), but also Natural Multimodal Processing (NMP) and Natural Multimodal Understanding (NMU) virtually for free, because it is already inherent in the design respectively structure of the foundational stores and their models, as explained in the past.
    Eventually, we generalized the whole approach a further time and use SWWW technologies and semantic MLs for standardized communication, messaging, and storing (e.g. an ontology can be used as a database schema).
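    The claimed correspondence can be shown in a few lines of code: a triple literally is a 3-tuple, and an n-tuple can be reified into n triples. The reification scheme below is our own illustrative choice, not a fixed standard.

    # Hedged sketch of the triple <-> tuple correspondence; names are invented.
    def triple_to_tuple(s, p, o):
        return (s, p, o)                      # a triple literally is a 3-tuple

    def tuple_to_triples(record_id, fields, values):
        # reify an n-tuple as n triples of the form (record, field_i, value_i)
        return [(record_id, f, v) for f, v in zip(fields, values)]

    print(triple_to_tuple("subject", "predicate", "object"))
    print(tuple_to_triples("row42", ("name", "city"), ("Alice", "Berlin")))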

    But it does not stop here, because Agent Communication Language (ACL), messaging and communication, and Multi-Agent Belief Revision (MABR), and also human language have already been aligned.
    See the documents titled "SemanticAgent a Platform for the Development of Software Agents" and "The Language of Machines", which are based on our Evoos, as shown once again herein, and were published in 2003.
    This means the

  • human can talk with human,
  • human can talk with the machine,
  • machine can talk with the human, and
  • machine can talk with machine

    automatically recognized by human and machine as part of a Language of Human and Machine, or Universal Networking Language (UNL) and Universal Communication Language (UCL).
    See also the other Clarifications of July 2021, December 2021, February 2022, and April 2022.

    But it does not stop here, because we did so with all modalities as part of our Natural Multimodal Processing (NMP), and Bridge from Natural Intelligence (NI) to Artificial Intelligence (AI).

    But it does not stop here, because we fused all realities. See for example the movie "Tron".

    But it does not stop here, because triple stores are implemented on the basis of hash tables, specifically multi-dimensional extendible hashing and dynamic hashing, which also means that they function as some kind of

  • polygonal data(base) model and management system, and
  • Associative Memory (AM), specifically Content-Addressable Memory (CAM) and Content-Addressable Storage (CAS), and tuple spaces,

    and Peer-to-Peer (P2P) Computing (P2PC) is based on the Distributed Hash Table (DHT) data structure.
    And our polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS) allows the implementation of a global triple store based on BRM and DHT respectively the integration of both, and a global tuple store based on the polygonal data model, both of which are already inherently semantic by design.
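    Both points can be sketched together: a triple store sitting on an ordinary hash table (one local index per access path) and, by hashing the same keys consistently, the assignment of keys to the peers of a Distributed Hash Table (DHT). The modulo-hashing node selection below is a deliberate simplification of real DHT routing, and all names are our own illustrative assumptions.

    # Hedged sketch: local hash-table triple index plus DHT-style key placement.
    import hashlib

    def dht_node_for(key, num_nodes):
        digest = hashlib.sha1(key.encode()).hexdigest()
        return int(digest, 16) % num_nodes    # which peer stores entries for this key

    class HashedTripleIndex:
        def __init__(self):
            self.by_subject = {}              # one local hash table per access path

        def add(self, s, p, o):
            self.by_subject.setdefault(s, []).append((s, p, o))

        def lookup(self, s):
            return self.by_subject.get(s, [])

    index = HashedTripleIndex()
    index.add("human", "subclass_of", "mortal")
    print(index.lookup("human"))
    print(dht_node_for("human", num_nodes=16))  # responsible peer in a 16-node DHT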

    But it does not stop here, because we also wanted a Scalable Distributed Tuplespace (SDT), since the Multiplatform is not scalable, and with our original and unique polylogarithmically scalable and synchronizable Distributed Computing (DC) or Distributed System (DS) we are also able to implement a global Associative Memory (AM), global triple store, Multiple Tuple Space (MTS) system, Multi-BlackBoard (MBB) system, Content-Addressable Memory (CAM), Content-Addressable Storage (CAS), and also the Global Grid, Grid 2.0, or Universal Space respectively Ontologic Net (ON) and Content-Addressable Network (CAN), and the Global Brain, Global Brain 2.0, or Universal Brain Space respectively Ontologic Web (OW).
    Those plagiarists never had a clue since 1998, and there was nobody who created something similar to our OW.

    Obviously, our Evoos was also the inspiring work and blueprint for the projects SmartKom and SmartWeb, and the whole field of Human Computing, as we also showed in relation to Derrida's Machines (see the quotes and comments).
    For sure, we do know the prior art very well and therefore we are able to show that none of them challenges the originality and uniqueness of our work of art described in The Proposal.

    Therefore, our point of view is also not wrong that the SmartKom based on the Semantic Web and the SmartWeb were an attempt to rescue prior art, specifically the works listed in the references of the quoted document. In fact, the companies and shareholders of the DFKI ...

    The connection of ontologies for the MultiModal Dialogue System (MMDS) and the domain is quite straightforward.
    At this point, we always mention our overall integration and reduction of the system stack, including Natural Multimodal Processing (NMP) directly in the computing and networking, and the use of XML only as an intermediate means.
    The question is what and how much of our Evoos was published as the Arrow System (see the quotes and comments above) and could be viewed as an alternative source of inspiration for the other works discussed in this and other clarifications and investigations on our website OntomaX.

    We quote a document, which is about Derrida's Machines and multiparadigmatic programming and was published in 2003 to 2004: "Derrida's Machines Part I
    Cloning the Natural - and other Fragments

    A fundamental theory of the natural
    If there is anything left in this world we live in which is still untouched and natural, then it is the naturalness of the natural numbers - and nothing else.
    [...]
    And why not Leopold Kronecker?
    "God made the integers, all the rest is the work of Man."
    As Natural as 0,1,2
    Philip Wadler. [...] 20 November 2002.
    "Whether a visitor comes from another place, another planet, or another plane of being we can be sure that he, she, [they,] or it will count just as we do: though their symbols vary, the numbers are universal. The history of logic and computing suggests a programming language that is equally natural. The language, called lambda calculus, is in exact correspondence with a formulation of the laws of reason, called natural deduction. Lambda calculus and natural deduction were devised, independently of each other, around 1930, just before the development of the first stored program computer. Yet the correspondence between them was not recognized until decades later, and not published until 1980. Today, languages based on lambda calculus have a few thousand users. Tomorrow, reliable use of the Internet may depend on languages with logical foundations. "
    [...]
    [...] the exclusive nature of the natural numbers will boil down to a very mundane activity in our cultural, that is, artificial world.
    The naturality of the natural number system, as we know it, will be entangled in an activity of increasing artificiality of multitudes of natural number systems.
    Although there is no culture without numbers, numbers are not cultural, but natural. They are the very nature in/of our culture. To transform this situation will change radically what we will understand by culture. The most advanced development of this classical arithmetical trance of naturality is still the global movement of digitalism and its technology.
    In other words, my old question is still virulent: What's after digitalism? (ISEA '98)

    Natural number series
    Natural numbers as models of fundamental abstract systems
    "A first attempt at a theory to describe numbers begins with a fundamental abstract type called nat0 as follows:
    [...]
    [...] the signature contains an arity-zero operation called zero and an arity-one operation called suc. These operations generate the following infinite series of expressions:
    zero, suc(zero), suc(suc(zero)), suc(suc(suc(zero))), ...
    in their Herbrand universe of the type.
    The only well-formed applications of these operators are the constant zero itself or successive applications of the suc function beginning with zero.
    [...]" Michael Downward, Logic, p. 181
    This is well known, well established, and useful, and for some strange reason it is called a word algebra. And it offers a stable fundament for the natural number series and all other types of linearly ordered series, too. At least there are enough people who strongly believe in that.
    As we see, and will see in the following, natural numbers, despite being natural, are not naturally accessible in mathematics. They need all sorts of sophisticated notational systems and interpreting mechanisms.
    The Stroke Calculus approach emphasizes the aspect of step-wise construction by rules applied to an initial object. This shows us more of the internal structure of the type of construction.
    The Set Theory approach develops an understanding of natural numbers out of a special set-theoretical operation, the bracket-operation for sets, based on a logical definition of the empty set, which in itself is not very self-evident.
    In contrast, the Category Theory approach emphasizes the external relationships of the constructors and gives us an explication of the intuition of natural numbers up to isomorphism.
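    [An illustrative aside from our side, not part of the quoted document: the signature of nat0 quoted above, an arity-zero operation zero and an arity-one operation suc, is exactly an initial term algebra. A minimal Python sketch of its Herbrand universe, with all names being our assumptions:]

        from dataclasses import dataclass

        # Terms of the word algebra nat0: zero, and suc applied to a term.
        @dataclass(frozen=True)
        class Zero:
            pass

        @dataclass(frozen=True)
        class Suc:
            pred: "Zero | Suc"

        def series(n):
            """Generate zero, suc(zero), suc(suc(zero)), ... up to depth n."""
            term, terms = Zero(), [Zero()]
            for _ in range(n):
                term = Suc(term)
                terms.append(term)
            return terms

        print(series(3))  # [Zero(), Suc(Zero()), Suc(Suc(Zero())), ...]
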
    [...]

    Semiotics of natural numbers

    Natural numbers in set theory
    [...]

    [...]

    Natural numbers and computability
    [...]

    Cloning naturality
    Today it seems that there is no reason not to clone and replicate the naturality of the natural numbers with their ultimate Herbrand universe.
    [...]
    Desedimenting artificiality
    [...] we have a chance for a desedimentation and deliberation of the numbers from the terrorism of linearity to a free play of writing opening up not only a multi-linearity of numbers but a "living tissue".
    This idea is easily supported by Aristotle's condemnation and fight against Platonist and Pythagorean ideas of numbers.
    [...]

    The conceptual graph of the abstract object nat0
    [...]

    Unicity, Intuition and Explication
    Aspects of the interplay between intuition and formalism
    Intuition is deeper than formalism
    "Hower much we would like to 'mathematize'the definition of computability, we can never get completely rid of the semantic aspect of this concept. The process of computation is a linguistic notion (presupposing that our notion of language is sufficiently general); what we have to do is to delimit a class of those functions (considered as abstract mathematical objects) for whichexists a corresponding linguistic object (a process of computation)." Mostowski, Thirty Years of Foundational Studies, 1966, p. 33
    "Truth is invariant under change of notation." (Goguen[)]
    [...]
    Writing beyond intuition and formalism
    What's the base of intuition?
    Egological foundation of intuition (Husserl, Brouwer)

    Dissemination: Introducing the proemial relationship
    There are many ways of combining abstract objects or institutions. [...]
    The idea of dissemination tries to explicate and formalize a quite different intuition of combining institutions, one which does not produce diversity and multiplicity by combining a basic system as a product or sum or whatever construction, but introduces multiple differences into the very concept of the basic system itself. After this construction a polylogical or polycontextural system can be combined in many ways. This idea of multitudes of basic differences in the notion of formality, taken seriously, is in fundamental contrast to the existing concepts of formality in mathematics. Obviously, these multitudes are more fundamental than all types of many-sorted theories, typed logics, or pluralities of regional ontologies, domains, and contexts.

    The idea of proemiality
    A very first step in this direction was made by the philosopher Gotthard Günther with his idea of a "proemial relationship" introduced in his paper "Cognition and Volition" (1970) about a Cybernetic Theory of Subjectivity.
    "In order to obtain a general formula for the connection between cognition and volition we will have to ask a final question. It is: How could the distinction between form and content be reflected in any sort of logical algorithm if the classic tradition of logic insists that in all logical relations that are used in abstract calculi the division between form and content is absolute? The answer is: we have to introduce an operator (not admissible in classic logic) which exchanges form and content. In order to do so we have to distinguish clearly between three basic concepts. We must not confuse
    a relation
    a relationship (the relator)
    the relatum.
    The relata are the entities which are connected by a relationship, the relator, and the total of a relationship and the relata forms a relation. The latter consequently includes both, a relator and the relata.
    "However, if we let the relator assume the place of a relatum the exchange is not mutual. The relator may become a relatum, not in the relation for which it formerly established the relationship, but only relative to a relationship of higher order. And vice versa the relatum may become a relator, not within the relation in which it has figured as a relational member or relatum but only relative to relata of lower order.
    [...]
    We shall call this connection between relator and relatum the 'proemial' relationship, for it 'pre-faces' the symmetrical exchange relation and the ordered relation and forms, as we shall see, their common basis."
    "Neither exchange nor ordered relation would be conceivable to us unless our subjectivity could establish a relationship between a relator in general and an individual relatum. Thus the proemial relationship provides a deeper foundation of logic as an abstract potential from which the classic relations of symmetrical exchange and proportioned order emerge.
    It does so, because the proemial relationship constitutes relation as such; it defines the difference between relation and unity - or, which is the same - between a distinction and what is distinguished, which is again the same as the difference between subject and object.
    It should be clear from what has been said that the proemial relationship crosses the distinction between form and matter, it relativizes their difference; what is matter (content) may become form, and what is form may be reduced to the status of mere "materiality"."
    "We stated that the proemial relationship presents itself as an interlocking mechanism of exchange and order. This gave us the opportunity to look at it in a double way. We can either say that proemiality is an exchange founded on order; but since the order is only constituted by the fact that the exchange either transports a relator (as relatum) to a context of higher logical complexities or demotes a relatum to a lower level, we can also define proemiality as an ordered relation on the base of an exchange. If we apply that to the relation which a system of subjectivity has with its environment we may say that cognition and volition are for a subject exchangeable attitudes to establish contact but also keep distance from the world into which it is born. But the exchange is not a direct one.
    If we switch in the summer from our snow skis to water skis and in the next winter back to snow skis, this is a direct exchange. But the switch in the proemial relationship always involves not two relata but four!" Günther
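
    [An illustrative aside from our side, not part of the quoted document: Günther's exchange of relator and relatum across orders can be caricatured in a few lines of Python. The point is only the order discipline: a relator may occur as a relatum solely relative to a relationship of higher order, and vice versa. All names are hypothetical:]

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Relatum:
            name: str
            order: int  # level at which this entity figures as a relatum

        @dataclass(frozen=True)
        class Relator:
            name: str
            order: int  # level at which this entity relates relata

        def as_relatum(r: Relator) -> Relatum:
            """A relator becomes a relatum only relative to a relationship
            of higher order (the proemial exchange)."""
            return Relatum(r.name, r.order + 1)

        def as_relator(r: Relatum) -> Relator:
            """A relatum becomes a relator only relative to relata
            of lower order."""
            return Relator(r.name, r.order - 1)

        loves = Relator("loves", order=1)
        print(as_relatum(loves))  # Relatum(name='loves', order=2)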

    Some explanations of the idea of proemiality
    [...]
    [...] Remember: We must not confuse a relation, a relationship (the relator), the relatum.
    [...]
    After having introduced the idea of proemiality it would be possible to formalize it further and to develop a preliminary theory of proemiality, also sometimes called chiastics or theory of mediation.
    The main thesis, therefore, is that proemiality offers a mechanism of combining institutions which doesn't belong to the universe of combining categories.
    This mechanism of combining institutions, e.g. distribution and mediation, is fundamentally different from the classical ones. Despite this difference, this strategy is in no contradiction or opposition to the known principles of combining systems of logics.
    [...]
    Don't confuse the exchange of relator and relatum of a relation in the mechanism of the proemial relationship with the superposition of relator and relation in relational logics. There is no problem in applying a relator, an operator, or a functor to the result of a relation, operation, or function, as e.g. in recursion theory or in meta-level hierarchies.
    Metaphor
    If we proemialize the linguistic subject-object-relation of a sentence we shouldn't hesitate to be strictly structural.
    The example is borrowed from Heinz von Foerster.
    "The horse is gallopping" (Das Pferd gallopiert), the interchanged sentence can only be "The gallop is horsing" (Der Gallop pferdet).
    Nobody supposed that we are doing analytic philosophy.

    Proemiality and Architectonics
    [...]
    An operator as an operand is an operand (of another operator)
    Metaphors
    I as myself and I as another.
    The other as itself and the other as another (e.g. myself).
    [...]

    Proemiality and Heterarchy in a UML Framework
    To give a more transparent modeling of the proemial relationship it may be helpful to set the whole construction and wording into a UML diagram and to use the modeling of heterarchy worked out by Edward Lee as a helpful tool to explicate proemiality in terms of UML modeling.
    Although the proemial relationship is not restricted to ontology and the distribution of hierarchical ontologies in a heterarchic framework, and despite the fact that UML has no mechanisms of category change, metamorphosis, and mediation, it seems to be a helpful exercise to find a correspondence between the UML heterarchy diagram and the construction of proemiality, which is based more on elementary terms of relationality. The heterarchy diagram is a class diagram which models the static structure of the system. Proemiality, although it is fundamentally dynamic, has its static aspects. It is this static aspect we can model with the help of the UML heterarchy diagram.
    A further step of UML modeling of proemiality will have to involve more dynamic models like interaction and activity diagrams.
    [...]

    An example: "Beyond Substance and process"

  • 1 Metarules
    "Metarule [Cellular Automata (]CAs[)] introduce the required openness by postulating a hierarchy of CA rules. Each CA at a particular level in the hierarchy has a finite lattice, a finite number of states and a finite number of rules. [...]"
  • 2 Beyond Substance and Process ...
    "One possible objection to this scheme is that it is ontologically dualistic at the lowest level in the hierarchy (states and rules) and ontologically monistic at all other levels (rules and metarules). This problem may be overcome by extending the framework to a bidirectionally-infinite hierarchy [...] Such a framework replaces the dualistic ontology of state and rule, and their corresponding physical counterparts, substance and process, with a monistic ontology based on an instance of a more general kind."
  • 3 Another Approach
    [...] are using the set of rules from one level of the hierarchy of cellular automata to define the automata on the next level. This approach produces an exponentiation of the quantitative complexity of the apparatus and accepts the basic rules of identity of the objects of the CA at each level.
    Another, more holistic approach, is given, with the morphogrammatic abstraction applied to the set of the rules. The new level is then defined by morphograms which are beyond semiotical identity.
    [...]
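    [An illustrative aside from our side, not part of the quoted list: the quoted metarule idea, a hierarchy in which the rule of a Cellular Automaton is itself the state acted on one level up, can be sketched as a 1-D CA whose rule table is occasionally rewritten by a toy metarule. A hedged Python sketch under our own assumptions:]

        import random

        def step(cells, rule):
            """One step of a 1-D binary CA with a 3-neighbourhood rule table."""
            n = len(cells)
            return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                    for i in range(n)]

        def metastep(rule):
            """Toy metarule: the level-0 rule is itself data for level 1;
            here the metarule just flips one random rule-table entry."""
            key = random.choice(list(rule))
            return {**rule, key: 1 - rule[key]}

        # Rule 110 as the initial rule table of the base level.
        rule = {(a, b, c): (110 >> (a * 4 + b * 2 + c)) & 1
                for a in (0, 1) for b in (0, 1) for c in (0, 1)}
        cells = [random.randint(0, 1) for _ in range(32)]

        for t in range(8):
            cells = step(cells, rule)
            if t % 4 == 3:      # every few steps the metarule intervenes
                rule = metastep(rule)
        print(cells)
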
    The proemial hierarchy of polycontextural logics
    [...]
    This hierarchy of logical orders is not to be confused with the hierarchy of operators and operations as in the theory of types or the meta-language concept. The exchange happens between operators and operands, and not between operators and the operation as a result of the application of operators to their operands, as in recursive number theory or recursive formulas.
    Presupposing the terminology of operators and operands or any other dichotomic order, e.g. relator/relatum, rule/statement, the PCL framework can be put into a proemial order which can be seen as a new type of hierarchical order [...].
    [...]
    [...] The morphogrammatic system itself has its foundation in itself, because the morphograms are the (re)presentation of their own operators. Here the distinction of operator and operand is in some sense obsolete.
    From a proemial point of view there is no need for an infinitary approach of levels. There is also no need for a monistic ontology. The proemiality of operator and operand or of rules and states is neither an operand nor an operator but the foundation of both.
    From multi-level to one-level and zero-level ontologies
    [...]
    Proemiality is an interlocking mechanism of typed and zero-typed languages. A zero-typed (keno-typed) language is not a non-typed or a one level typed language but a language beyond the distinction of operator and operand as the base of types and typed languages.

    Complementarity of dissemination and togetherness
    [...]
    In this sense, dissemination is a process of disseminating single systems and at the same time it is the wholeness, the togetherness of the disseminated systems. This is also included in the notion of dissemination as a process of distribution and mediation of systems. Dissemination is always both: multitude and wholeness. [This sounds like being holonic.]

    Combinatorics of chiastic changes of categories
    [...]

    Metamorphosis or Proemial combinations in abstract objects
    [...]

    Modularity and Metamorphosis
    [...]

    Chiasms, metamorphosis and super-operators
    [...]
    Chaotic Logics
    Chaotic logics are not the logics of chaos but the logics of change.
    Change in chaotic systems is not a continuous process but the switch from one mode to another mode of a system by some changes of the states of the system.
    Chaotic logics are the logics of interacting logical systems.
    Changes in chaotic logics are modeled by transcontextural jumps from one system to another system and are defined in sharp contrast to the intracontextural steps of the expansion rule in a singular system. Transjunctional jumps don't exclude the possibility of staying in the primary system at the time of the jump.
    Cybernetic Ontology
    Order from Noise.
    [...]
    Translations, Goguen's Semiotic Algebras
    It turns out that correct translations are conservative metamorphoses.
    Maybe the main problem of machine translation is just this decision: to start with conservative translations and to try to model common-sense texts, which are full of games violating this conservativity, with this restricted approach. In other words, conservative translations are based on disambiguated and context-free semantics. A case which is very artificial and doesn't match natural language at all.
    A conservative example: conflicts in the tree of data objects
    All programming languages are based on very strict and stable conceptual structures. If the data objects are introduced as an ordered system like the "tree of data objects", this structure will never be changed in the process or execution of a program (Programmablauf). If something were changed in this order it would automatically produce serious conflicts.
    There is an easy way of producing conflicts in a dialogical system: if, e.g., L1 declares A as a simple object and L2 simultaneously declares A as a complex object, that is, as a structure. Obviously it is possible, in the polycontextural approach, to model this conflict and to resolve it in another logical system, say L3, and this without producing a metasystem subordinating L1 and L2.
    Furthermore, the conflict has a clear structure, it is a metamorphosis of the terms "simple object" in L1 and "structure" in L2. This metamorphosis is a simple permutation between sorts over two different contextures based on the chiastic structure of the mediation of the systems. But it respects the simultaneous correctness of both points of view in respect of being a "simple object" and being a "structure". In this sense it can be called a symmetrical metamorphosis.
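    [An illustrative aside from our side, not part of the quoted document: the L1/L2 conflict described above, the same name A typed as a simple object in one system and as a structure in another, mediated in a third system L3 without a subordinating metasystem, can be given a toy shape in Python. All names are ours:]

        # Two contextures declare the same name with incompatible categories.
        L1 = {"A": "simple"}        # in L1, A is an atomic object
        L2 = {"A": "structure"}     # in L2, A is a complex object

        def mediate(name, s1, s2):
            """L3 does not subordinate L1 and L2 under a metasystem; it only
            records the chiasm: both declarations remain simultaneously
            valid, each relative to its own contexture."""
            return {("L1", name): s1[name],
                    ("L2", name): s2[name],
                    "metamorphosis": (s1[name], s2[name])}

        L3 = mediate("A", L1, L2)
        print(L3)   # both points of view are kept, as a symmetrical pair
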
    Today computing is often characterized by its interactivity. But the programming languages have not changed to respond to this situation. They are still, in principle, monologic.
    A further example of an interchange between programming languages would be the chiasm between data objects and control structures.
    A very shy implementation of this interlocking mechanism, with far-reaching consequences, is at the basis of all artificial intelligence attempts: the internal difference and possible ambiguity in LISP between data and programs, ruled by the QUOTE/EVAL function.
    These examples should not be confused with contradictions arising from a conflict in attributes between different pieces of information. This implies a logical and linguistic level of communication and doesn't touch the categorical framework of interaction.
    According to Wegner, interactions are paraconsistent, or at least belong to a paraconsistent type of logic. This may be true on a linguistic-logical level, but it is not in correspondence with a more architectonic and chiastic view of interactivity.
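
    [An illustrative aside from our side, not part of the quoted document: the LISP QUOTE/EVAL ambiguity mentioned above, a program demoted to data and data promoted back to a program, has a crude analogue in most dynamic languages. A minimal Python sketch:]

        # QUOTE demotes a program to data; EVAL promotes data to a program.
        program_as_data = "1 + 2 * 3"    # 'quoted': just a string (data)
        result = eval(program_as_data)   # 'evaluated': data becomes program
        print(result)                    # 7

        # The chiasm also runs the other way: a program producing data
        # that is itself a program again.
        generated = f"{result} * {result}"
        print(eval(generated))           # 49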

    A simple typology of chiasms
    [...]

    Proemiality between structural and processual understanding
    [...]

    Category Theory - and beyond?
    [...]

    Dissemination of natural objects
    [...]

    On interactivity between cloned systems
    But we can surely suppose that these clones (or replications) will start to interact with each other and begin to produce slightly more interesting series of expressions than the purely isolated parallel ones, which turn out to be a special case of the modi of interaction. Even more, the single fundamental system looks like a very special reduction of the interacting system, namely the case of the modus of interacting with itself.
    Some evidence shines up that the interaction modi between systems are far more fundamental than the single systems in themselves. Although I started with the idea of cloning formal systems, this start is ruled at first by the strategy of inheriting from the classical isolated system as much of its genotype, methods, and constructions as possible. [Bingo!!!]
    What should a multi-agent theory or formalism look like?
    Multi-agent systems add another dimension to agent-oriented systems. Can single-agent formalisms be extended for multi-agent systems? What are the extra features of such systems that must be addressed, and how might this be done? [This sounds like a Holonic Agent System (HAS).]
    [...]

    Reflectional architectures of interactivity
    [...] These replications have to modify themselves to be able to interact together. They have to internalize, that is, to realize in themselves, the complex interactional structure of their environment. The new metaphor which is leading my studies of the construction of internalization (introspection, reflection, and interaction) should be the metaphor of togetherness.
    Togetherness may be realized for each single system by the complex reflectional structure of
    auto-referentiality,
    hetero-referentiality and
    self-referentiality.
    This structure is well known from Hegel and further explained by Günther in his Cybernetic Ontology, but as much as it is used for sociological notions it is not formalized and implemented at all. A similar but more computer-scientific approach can be found in the works on Computational Reflection (Smith, Maes, Sloman, Kennedy).
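    [An illustrative aside from our side, not part of the quoted document: the cited works on Computational Reflection at least suggest a programmatic shape, a system that keeps a model of its own operation (auto-reference) and mirrors another system in its own domain (hetero-reference). A crude Python sketch via introspection, with all names hypothetical:]

        import inspect

        class ReflectiveAgent:
            """Keeps a model of itself and a mirror of another agent."""

            def __init__(self, name):
                self.name = name
                self.self_model = {}    # what I hold about myself
                self.other_model = {}   # what I hold about the other

            def introspect(self):
                # auto-reference: read my own methods into my self-model
                self.self_model = {m: True for m, _ in
                                   inspect.getmembers(self, inspect.ismethod)}

            def observe(self, other):
                # hetero-reference: mirror the other agent in my own domain
                self.other_model = dict(other.self_model)

        a, b = ReflectiveAgent("a"), ReflectiveAgent("b")
        a.introspect(); b.introspect()
        a.observe(b)
        print(sorted(a.other_model))  # a's internal mirror of b's operations
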
    [...]
    This idea has to be considered in the process of inheriting methods and techniques from the classical formal constructions. In abandoning the superiority of alpha-numerical notational systems, with their atomicity and linearity, the tabular notational texture of kenogrammatics and morphogrammatics is to be considered for the realization of interacting systems.
    Now, our cloned systems, that is, the distributed and mediated systems, are to be understood each as an intersection of a multitude of contextures at a logical locus. [...] Interaction is an interlocking mechanism of these reflectional architectures. In more metaphorical terms, architectonics is the home of togetherness. [...]
    As a consequence of the complex architectural structure of the system their objects are inheriting this structure from the very beginning. [...]
    [...]

    Proemiality and reflectional architectures
    So far I have introduced two fundamental notions for poly-contextural systems: the notion of proemiality (chiastics) and the notion of reflectional architecture. How do these two concepts interact?
    To answer this question I have to give an explication of the arrows in the diagram.
    The internal structure of the simple arrows is explained by the proemial structure, the interlocking mechanism, between the primary and the mirrored contextures at their logical locus. Between contexture1 and contexture2, each in its functionality as a primary contexture, we have an exchange relation. The same holds for the mirrored case of contexture1 and contexture2; both are in an exchange relation. Internally, for each compound contexture we observe an order relation between the original and the mirrored situation. And finally, between the same distributed contextures we have the case of categorical coincidence. All relations together define the situation of the proemial relation between four objects.
    In more linguistic terms we could speak about the chiasm of I and Thou. Obviously this may sound similar to the "mirror stage" as it is described by Jacques Lacan.
    Proemiality and interactionality are together in a co-creative interplay. This constitutes the domain of togetherness.

    Towards architectures in computational reflection
    The importance of architectures for artificial intelligence was discovered early by Sloman (1986) and Steels (1986). [...] What was hidden to the booming second-order cybernetics literature, especially in Germany, about reflection, i.e., self-referential systems and their circular logics, was exactly the notion of architecture for self-reflecting systems. Despite some nice drawings [...] and similar constructions [...] there was no awareness of epistemic architectures at all. [...]
    In hard contrast to this anti-internalism, Gotthard Günther had proposed a complex ontology of epistemic "internalism" beyond the simple distinction of inside/outside, developing logical differences on the inside, and on the outside too. But neither the theories of reflection, introspection, and representation of Smith and Maes, nor Günther's complex ontologies, offered a hint of how to implement these architectures in a mathematical and logical setting. The work of Sloman, which I read much later, still remains in the same descriptive vagueness, despite his different implementations. [...] fundamental ontology of love [...]
    [...]

    Polycontextural logics and reflection
    Polycontextural logics start with the basic but simple idea that each rational agent has its own point of view of its world and that therefore each agent has its own logic.
    To each agent corresponds a classical logic that determines the logification of its knowledge and experiences collected in a domain, called contexture. The next basic idea of the concept of polycontextural logic is given by the stipulation that these rational agents are not isolated from each other but are in a network of interaction. Polycontextural logic is describing the structural rules of interactions between rational agents.
    The network of interaction of rational agents is modeled in the concept of Polycontexturality. This concept is independent of the notions of information and communication and other cybernetic and computer scientific terms. The proemial relationship or chiasm describes the general structure of interactivity between agents. The proemial relationship is introduced by the interlocking mechanism of four types of relations: order, exchange, coincidence and place.
    Interactivity and co-operation between different rational agents is reflected in their logical connectives: junctions (plus negations) and transjunction. The junctional connectives are ruling the intra-contextural, the transjunctions the trans-contextural situations.
    [...] the world of interacting agents is cut twice: one cut is between agents and their world, and another is between agents and their models of the world. The simple cut between agent and world is the Cartesian cut, and its logic corresponds to bivalent classical logic.
    The fact of the double cut, the epistemic situation for interacting agents, forces us to non-classical logics in which the process of cutting itself can be modeled.
    [...]
    It seems that this is the main difference between trans-classical and classical understanding, modeling, formalizing, and constructing of artificial interacting systems.
    In this sense it seems not to be sufficient to combine logics, to mix different logics and different methods, to obtain a logical system for multi-agent robots [...].
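
    [An illustrative aside from our side, not part of the quoted document: the basic stipulation, each rational agent owning a classical bivalent logic over its own contexture, while transjunctions cross contextures instead of combining values inside one logic, can be caricatured in Python. All names are our assumptions:]

        # Each agent owns a classical (bivalent) valuation of its contexture.
        agents = {
            "A1": {"p": True,  "q": False},   # agent A1's world
            "A2": {"p": False, "q": True},    # agent A2's world
        }

        def intra_and(agent, x, y):
            """Junction: an ordinary connective inside one contexture."""
            v = agents[agent]
            return v[x] and v[y]

        def transjunction(src, dst, x):
            """Toy transjunction: instead of yielding a value inside src,
            the question is handed over to another contexture."""
            return agents[dst][x]

        print(intra_and("A1", "p", "q"))       # False: classical, inside A1
        print(transjunction("A1", "A2", "p"))  # False: A1 defers 'p' to A2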

    Pfalzgraf: Fibring logics and multi-agent robotics
    Today's concepts of fibering or combining logical systems do not include the reflectional aspects of interaction and togetherness. Basically their ontology remains mono-contextural. The problem of multitudes is shifted from the ontology or the universe of the underlying logic to its sorts in a many-sorted logic. Multitudes, plurality, are therefore based in singularity and unicity, as well explained in the theory of institutions. To model reflectional interacting systems we have to introduce a polycontextural ontology with all the consequences for logics, arithmetics, semiotics, and so on.

    Introducing the metaphor of a tissue of coloured logics
    [...]
    Trans-classical logic is aimed at modeling the situation of rational reasoning between different agents, where each agent has its own logic, that is, its own point of view with respect to its world. Therefore trans-classical logic reflects the world from a multitude of different logical points of view. Each locus has its own "mathematicized" formal apparatus, its own mathematical formal logic. As a consequence, the monolithic or erratic concept of world disappears as a very special case of disambiguity in a dynamic multi-verse.
    The mathematicization of a world including a multitude of different logical loci, points of view, is obviously different from the more abstract model of the classical mono-logical mathematicization of the world.
    Maybe the classical model reflects an ideal world or even investigates "the principles of reasoning for perfect worlds" (Fitting), while trans-classical logic reflects the rational principle of a conflicting, interacting, co-operating world, where the participants of these interactions create their own worlds together in a co-creating manner. In this sense reasoning and modeling are not structures but actions.
    The new concept of trans-classical logic as a complex logic of a multitude of points of view does not introduce some "shades of grey" between the strictness of the classical concepts. There is no fuzziness here. It introduces something different: each (classical) logic gets an index that indicates the point of view of the rational agent, which indicates the separation between the different agents. Trans-classical logic is not a logic of "white and black" nor a logic with shades of grey; it is a logic of colors, a colored logic. The logics of the living tissue are colored ones.
    Each color has its own formal and operative strength.
    There is no ambiguity and fuzziness in this notion of colored logics.
    But these colors are not only simply identical with themselves. Each colored system is able to reflect the other systems simultaneously in its own domain. This new ambiguity is produced by the complexity of the polycontextural logic as a whole, with its interaction and reflection.
    [...]
    As a result of the plurality of formal systems as differently colored logics, each logic and complexion of these logics is localized in a structural space. Every logic has its own locus. Each logical locus gives place for the replication of other logics which are located at other logical loci. The theory of these ontological or pre-logical loci is called kenogrammatics.
    A more AI setting
    In other words, we can say that the mirroring of one contexture by another is a belief function. One system believes something about another system. This more linguistic perspective opens up a connection to the work about belief systems, belief logics, etc. in Artificial Intelligence [...].
    But also to the Algebra of Reflection as it is proposed [...].

    [...]

    Dissemination of deductive systems
    [...]

    Dissemination of a framework of Tableaux Logics
    [...]

    Classical and polycontextural logics
    [...]

    Towards a metaphor of togetherness
    [...]

    Kenogrammatic foundations of togetherness
    [...]

    Tactics of implementing polycontextural systems
    Because we still don't have the trans-classical computing systems we are forced to model and to implement our trans-classical formalisms in the framework of classical concepts. One obvious way of modeling the disseminated objects is done by using many-sorted logics (many-sorted abstract types).
    The sorts of a many-sorted logic are treated as universes (names, contextures) of abstract objects.
    We shouldn't forget that I am using the term sorts in this implementation scheme as a logical term, not to be confused with sorts as data types. That means that our sorts also have to include the control structures of the programming languages. Poly-sorts in this sense are not only different vocabularies and dictionaries but also implement different control structures. And other stuff too.
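    [An illustrative aside from our side, not part of the quoted document: the tactic just described, sorts treated as whole contextures that carry their own vocabulary and their own control structure, might be sketched as sorts-as-namespaces. A hedged Python sketch under our own assumptions:]

        # Each 'sort' is a whole contexture: its own vocabulary *and* its
        # own control structure, not merely its own data type.
        contexture1 = {
            "vocabulary": {"zero": 0, "suc": lambda n: n + 1},
            "control": lambda f, x: f(x),        # plain application
        }
        contexture2 = {
            "vocabulary": {"zero": (), "suc": lambda t: (t,)},
            "control": lambda f, x: f(f(x)),     # a different regime
        }

        def run(ctx, n):
            """Build suc^n(zero) under the contexture's own control regime."""
            term = ctx["vocabulary"]["zero"]
            for _ in range(n):
                term = ctx["control"](ctx["vocabulary"]["suc"], term)
            return term

        print(run(contexture1, 3))  # 3
        print(run(contexture2, 3))  # ((((((),),),),),) -- nesting depth 6
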
    [...]

    Towards poly-versal algebras
    [...]

    Polylogical abstract objects
    [...]

    [...]

    Problems of the beginning and the beginning of problems
    The beginning as zero
    [...]
    The term zero seems to be a very privileged object. It is the beginning of everything; in this sense it is not only a beginning of many other beginnings, but an origin.
    It is called an initial object. And later we can show that there is one and only one such initial object, all others are strictly isomorphic to it. The whole richness of the pluralities of beginnings is reduced to the general and abstract initial object as the only origin.
    [...]
    In the case of the static approach we have only the possibility of reaching the different zeros from a zero in a given system. That is, the zero of a neighboring system is reached as the neighbor of zero in a chosen system. Functions which are not zero do not have a neighbor in another system which is a zero function.
    This static situation is radically changed in a dynamic system. Each function can have its own zero neighbors. Arithmetically speaking, each number in one system can change its functionality to a beginning in another system. And each beginning in one system can be an ending in another system. Therefore, no absolute beginning is needed, and an ending need not be connoted with attributes like potential or actual or factual or whatever type of infinity, nor with the concept of finitude. All this Greek heritage will come into play at a much later step of arithmetical thinking.
    [...]
    But there are other ways of thinking, too. They have their occurrence in Hegel's Logic and their further development in Günther's Natural Numbers in Trans-Classic Systems.
    As I have shown before, the idea of proemiality is to inscribe the difference which constitutes all relations and operations as such. Proemiality is the prelude to all operations in formal systems. It is the constitution of all institutions as formal systems. A transclassical approach to the problems of introducing natural number systems is therefore to apply the proemial relation, that is the strategy of chiasm, onto the arithmetical system. [...]
    This leads to a characterization of a dynamical approach written, inscribed, as a chiasm between the four terms: initial, final, successor, predecessor.
    This chiasm or proemial relation between initial and final, successor and predecessor, does not need a fixed beginning; it doesn't force us to accept a decisionist beginning or start of the abstract system by a privileged initial element, written as an introductory rule of level zero. On this level, there is also no need to be concerned about infinities of all sorts.
    On the other hand, it offers a mechanism for a mediating interplay of cognitive and volitive structures and actions in a formal system.
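    [An illustrative aside from our side, not part of the quoted document: the chiasm between initial, final, successor, and predecessor, with no privileged zero, can be toyed with as linked counting systems in which a plain number of one system functions as the beginning of its neighbour. All names are hypothetical:]

        from dataclasses import dataclass

        @dataclass
        class CountingSystem:
            name: str
            initial: int    # where counting begins *in this system only*

            def successor(self, n):
                return n + 1

            def predecessor(self, n):
                return n - 1

        # No absolute beginning: a number of S1 opens S2 as its initial.
        S1 = CountingSystem("S1", initial=0)
        S2 = CountingSystem("S2", initial=S1.successor(S1.successor(4)))

        n = S2.initial
        print(n, S2.successor(n), S2.predecessor(n))   # 6 7 5
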
    [...]

    [...]

    Where is the problem?
    Counting robots
    [...]
    As I tried to show, our understanding and our formalization of natural numbers is based on a very deep intuition and an insight in the very nature of numbers.
    But how could I presuppose that my robot or my extra-terrestrial visitor has the same deep and well-founded intuitions? How could my robot even have any intuitions?
    And surely, nobody has ever seriously asked a human child if it really wants to learn all this stuff.
    Obviously, if we want to, or have to, construct a robot, that is, an artificial system which is able to use numbers, we have to be able to teach this system from scratch everything which is needed to understand and to use numbers.
    [...]

    Computational Ontology and the Problem of Identity
    [...]
    "Real-world computer systems involve extraordinarily complex issues of identity. [...] The aim of the Computational Ontology project is to focus on identity as a technical problem in its own right, and to develop a calculus of generalized object identity, one in which identity -- the question of whether two entities are the same or different -- is taken to be a dynamic and contextual matter of perspective, rather than a static or permanent fact about intrinsic structure." Brian Cantwell Smith
    "By the way, what is static and what is dynamic may be in the eye of the beholder. 'We suggest...that many grammatical frameworks are static formalizations of intuitively dynamic ideas',.." Yuri Gurevich
    "Current OO notations make no distinction between intra-application variability, for example, variability of objects over time and the use of different variants of an object at different locations in an application, and variability between applications, that is, variability across different applications for different users and usage contexts." K. Czarnecki, U. W. Eisenecker, Generative Programming

    Identity

    Equality

    Bisimulation
    [...]

    Kenogrammatic decomposition and bisimulation
    [...]

    Sameness in PCL-Systems
    Identity vs. diversity.
    Equality vs. sameness vs. non-equality (?)
    Sameness as the basic category of polycontextural systems.
    Gleichheit (Heidegger) likeness

    Cloning the Ur-Logik
    [...]

    Toward combinatory poly-logics
    [...]
    Two ways of modeling: dissemination and fibering
    In addition to the approach of disseminating systems by means of proemiality, distribution, and mediation, we can model this procedure in the category framework of logical fibering. This approach is well known through the work of Pfalzgraf (1988 - ).
    Maybe it is possible and helpful to make the distinction that the disseminatory approach corresponds more to a proto-theoretical thematization, whereas the category-theoretical approach reflects more a meta-theoretical point of view.

    Some Applications
    Pragmatics of cloned natural systems
    [...]

    Relativization of Inductive Definitions [...]
    Turing Machines
    [...]
    [...] today's reality of computation is far beyond what is conceived by Turing Machines. Instead of algorithms, one of the new metaphors and challenges seems to be interactivity, in all its forms.
    Therefore, it is possible to start a more deconstructive reading of the concept of Turing machines and to introduce step by step a new type of machines, the polylogic machines [...]. Nothing is wrong with the classical concepts, nor with the known extensions, like o-machines, etc., of Turing himself and others. And nevertheless there is no reason not to try another approach, surely not to exactly the same challenges, but strongly related to each other and interwoven in some family resemblance (similarity, likeness).

    Polylogic Graph Reduction Principle

    Computable Metamorphosis

    Programming languages in the context of proemiality
    polyLISP
    [...]
    Agents and Ambiguity
    "Understanding natural language also requires inferring hidden state, namely, the intention of the speaker. [...] Problem-solving agents have difficulty with this kind of ambiguity because their representation of contingency problems is inherently exponential."
    [...]
    From a technical point of view of poly-contextural systems there is no reason to think that the complexity of dealing with ambiguity has to grow exponentially.
    Is there a method in the poly-contextural approach to reduce complexity from the exponential to the linear type?
    [...]

    Internal vs. external descriptions of interactions
    [...]
    Interpretations: Chiasm of memory and processor
    Kenogrammatics is not only neutral to the distinction of number and notion, as Günther pointed out; it is also neutral to the distinction of program and data, and on a hardware level it is neutral to the distinction of processor and memory. Kenogrammatics gives space for an interlocking mechanism between, say, memory and processor. In other words, in trans-computation, which is close to the theory of living systems, the living tissue, to be a processor or a memory is purely functional and is ruled by the as-category."

    Comment
    A lot of these keywords are also included in The Proposal, which shows that our Evoos truly has many, if not all, of these properties, and even more.

    We quote a document, which is about Derrida's Machines and multiparadigmatic programming and was published from 2003 to 2004: "Derrida's Machines Part I
    The new scene of AI: Cognitive Systems?

    Some brand new trends, similarities and complementarities to my work, even filiations, and other connections are listed.
    [...]

    Some Gurus
    Marvin Minsky
    Push Singh

    Aaron Sloman
    The Cognition And Affect [(CogAff)] Project
    [...]
    [A student/researcher]
    My PhD thesis topic is "Distributed Reflective Architectures for Anomaly Detection and Autonomous Recovery". Some technical reports are available in the Cognition and Affect Directory.
    The aim of the research is to explore architectures which allow an autonomous system to detect and recover from anomalies without user intervention. An anomaly is any event that deviates from the model-predicted state of the world and may also occur in the system's own software or hardware. This means that the system must have a model of its own operation (reflection).
    I am exploring forms of distributed reflection using a multi-agent network, where each agent may specialise in a particular aspect of the system's operation. The network is not intended as a team of cooperating agents but instead as a decentralised control system for a single autonomous agent (a "multi-agent agent"). The idea is inspired by various branches of philosophy and biology, in particular by autopoiesis theory, immune system models and Minsky's Society of Mind concept. [Bingo!!!]

    John Laird
    Interactive game research
    [...]
    My primary research interests are in the nature of the architecture underlying artificial and natural intelligence. [...] Within AI my work has included research in general problem solving, the genesis of the weak methods, the origins of subgoals, general learning mechanism, interacting with external environments, learning by experience and by instruction, and integrating reactivity, planning, and learning, all in the service of constructing complete autonomous intelligent agents. [...] More recently, my research is concentrating on creating human-level AI agents for interactive computer games.
    I am a founder of Soar Technology.
    [...]

    Peter Wegner
    Interaction Machines
    [...]
    Models of Interaction
    "Interaction as a Conceptual framework for Object-Oriented Programming"
    [...]
    The paradigm shift from algorithms to interaction captures the technology shift from [...] to object-based, agent-oriented, and distributed programming. The radical notion that interactive systems are more powerful problem-solving engines than algorithms is the basis for a new paradigm for computing systems built around the unifying concept of interaction. This talk will extend Turing machines to interaction machines, show that interaction machines are more powerful than Turing machines and show that interaction machines are more natural as a model for objects, agents, design patterns and applications programming. More information can be found in an article in the May 1997 Communications of the ACM on "Why Interaction is More Powerful than Algorithms".
    Observability and Empirical Computation
    "Interactive Foundations of Computing Observability and Empirical Computation"
    [...]
    Interaction machines, which extend Turing machines with input and output actions, are shown to be more expressive than Turing machines, both by a direct proof and by adapting Gödel's proof of irreducibility of mathematics to logic. Observational expressiveness, defined by distinguishability of system behavior, provides a common metric for comparing the expressiveness of algorithms and interactive systems that also expresses the explanatory power of physical theories. The change in metric from algorithmic transformation to interactive observation captures the essence of empirical computer science. Observation in physics corresponds to interaction in models of computation. The relation between observers and the systems they observe is examined for both computation and physics. [...]
    Interactive extensions of Plato's cave metaphor and the Turing test confirm that interactive thinking is more expressive than logical reasoning. Turing test machines with hidden interfaces express interactive thinking and collaborative behavior richer than the traditional Turing test. [...] Pragmatics is introduced as a framework for extending logical models with a fixed syntax and semantics to multiple-interface models that support collaboration among clients sharing common resources. [...]
    More information can be found in the May 1997 Communications of the ACM in an article "Why Interaction is More Powerful than Algorithms", in the February 1998 Theoretical Computer Science in an article "Foundations of Interactive Computing".
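    [An illustrative aside from our side, not part of the quoted document: the behavioural difference between an algorithm and Wegner's interaction machine, input and output actions over time, with outputs depending on the whole interaction history, can be sketched with a Python coroutine:]

        def interaction_machine():
            """A machine with I/O actions over time: identical inputs can
            yield different outputs, because each output depends on the
            history of the dialogue, not on a single input tape."""
            history = []
            out = None
            while True:
                msg = yield out             # input action
                history.append(msg)
                out = (msg, len(history))   # output depends on history

        m = interaction_machine()
        next(m)                  # start the dialogue
        print(m.send("ping"))    # ('ping', 1)
        print(m.send("ping"))    # ('ping', 2): same input, new output
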
    [...]

    [...]

    Dov Gabbay
    Combining logics
    Fibring, labelled deductive systems
    Polycontextural logics and fibred categories
    [...]
    Why combine logics?
    Besides leading to very interesting applications whenever it is necessary to work with different logics at the same time, combination of logics is also of interest on purely theoretical grounds.
    The practical significance of the problem is clear, at least from the point of view of those working in knowledge representation (within artificial intelligence) and in formal specification and verification (within software engineering). Indeed, in these fields, the need for working with several formalisms at the same time is the rule rather than the exception. For instance, in a knowledge representation problem it may be necessary to work with both temporal and deontic aspects. And in a software specification problem it may be necessary to work with both equational and temporal specifications.
    [...]
    Why fibring?
    Among the different techniques for combining logics, fibring, as originally proposed by Dov Gabbay, deserves close study because of its generality and power. Fibring explains many other combination mechanisms as special cases and it is powerful enough for the envisaged applications.
    What is fibring?
    But what is fibring? The answer can be given in a few paragraphs for the special case of logics with a propositional base, that is, with propositional variables and connectives of arbitrary arity.
    The language of the fibring is obtained by the free use of the language constructors (atomic symbols and connectives) from the given logics. [...]
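
    [An illustrative aside from our side, not part of the quoted document: "free use of the language constructors from the given logics" means, concretely, that terms may mix the connectives of both logics. A toy fibred term language in Python, with hypothetical connective names:]

        # Connectives contributed by two logics; fibring mixes them freely.
        temporal = {"X": 1, "G": 1}      # next, globally (arity 1)
        deontic = {"O": 1, "P": 1}       # obligatory, permitted (arity 1)
        boolean = {"and": 2, "not": 1}

        fibred = {**boolean, **temporal, **deontic}

        def well_formed(term):
            """Check a prefix term tree against the fibred signature."""
            if isinstance(term, str):
                return True              # propositional variable
            op, *args = term
            return fibred.get(op) == len(args) and all(map(well_formed, args))

        # O(G(p and q)): a deontic operator over a temporal formula.
        print(well_formed(("O", ("G", ("and", "p", "q")))))   # True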

    Brian Smith
    Reflection
    Introspection
    Embeddedness
    On the Origin of Objects
    [...]
    TUNES
    Introductory Blurb about Reflection
    Reflection is the ages old concept of someone thinking about oneself. Yes, there are other meanings to the word; this is the one we consider here. In Computer Science, Reflection is a powerful conceptual tool with such various applications as simplifying logical proofs enough to make them physically tractable, enabling dynamic (run-time) evolution of programming systems, transforming programs statically (at compile-time) to add and manage such features as concurrency, distribution, persistence, or object systems, or allowing expert systems to reason about their own behavior. [Bingo!!!]

    Pattie Maes
    Computational Reflection [Bingo!!!]

    Self-referentiality
    Artificial Life Approaches [Bingo!!!]
    Kampis, Georg (1991): Self-modifying Systems in Biology and Cognitive Science.
    [...]

    Some comments on Hypercomputing
    [...]
    Grand Challenge ([a student/researcher])
    A new proposal dealing with a model of 10^13 elements is on the way, not mentioning the problems of principle of such an approach.
    How can we build human scale complex systems?
    [...]
    "Devise techniques for constructing software and hardware systems consisting 10^13 subprograms (approximately the number of cells in the human body) that carry out useful functions that in some sense are as complex as that of a mature human being."
    Background and Motivation
    The complexity and sophistication of living creatures dwarfs human-designed systems. Humans are constructed using a development process that starts with a single fertilised cell.
    Somehow, given the right conditions, this cell replicates and differentiates itself to form a human baby that learns how to talk, think, do mathematics, and write Grand Challenge proposals. We know that in some sense a human can be produced from the interpretation and decoding of a string of information, yet we have almost no idea how to construct large scale systems using this type of mechanism. Human top-down design methods appear to be unable to create stable complex systems on anything like the scale of living organisms. We appear to be faced with huge combinatorial problems.
    As we move toward the construction of nanoscale physical computing systems we will face enormous problems that originate in the problem of getting the information from our macroscopic world into the microscopic world of the computational elements. Living systems have an extremely elegant solution to this problem: the information for the construction of the whole is contained in all the basic computational elements (cells).
    Computer algorithms formulate computation as a transformation of inputs into outputs. Living systems are not best viewed in this way. In computer algorithms programs and data are rigidly divorced from one another; in living systems there appears to be no clear boundary between programs and data. Currently we have virtually no idea how to construct systems of this nature.
    Computer algorithms and electronic hardware are extremely sensitive to errors that can have catastrophic consequences. Living systems do not exhibit this propensity for error and are massively robust. Trying to build systems that are in some formal sense as complex as human beings may shed light on many fundamental questions concerning the nature of computation and could be useful: self-repair, intrusion detection, adaptive behaviour, intelligence ... [Bingo!!!]
    The PCL approach to neural complexity
    Günther's strategy in contrast to the neural science approach.
    In other words: there are not only theoretical but also practical reasons why research in the neural system of the brain will never unravel how the brain contributes to the solution of the riddle of subjectivity.
    However, there is another way to approach the problem. Instead of working uphill from the neuronic level we may ask: what is the highest achievement of the brain? In other words: what mental world concept does it produce? We can describe this world concept in semantic and structural terms and work down from there, posing the question: how must a brain be organised in order to yield such images with their peculiar semantic significance? This type of investigation has hardly started, but it is as important and necessary as the other one.
    Günther, Cognition and Volition
    Obviously, the PCL approach takes the other option of producing cognitive systems, not denying the reasonability of the more conventional empiricist approach of modeling, that is cloning nature, here the brain of a little child.
    It is helpful to use a decades-old distinction of Gotthard Günther's: the homunculus approach versus the robot approach in AI & AL.
    From the point of view of the PCL approach the grand challenge is not so much to build an artificial child but some small animals, maybe insects, not only as mini-robots, but as cognitive and volitive systems.
    Complementary to Günther's strategy I mention another strategy.
    Living systems have an extremely elegant solution to this problem: the information for the construction of the whole is contained in all the basic computational elements (cells). [Bingo!!!]
    This situation is well mirrored in the polycontextural approach. All contextures, say as logical systems, contain the operators of the local and global interactions. [Bingo!!!]
    Computer algorithms formulate computation as a transformation of inputs into outputs. Living systems are not best viewed in this way.
    Contextures are together by mediation as a form of structural coupling. Informational procedures like input/output communications play a secondary role.
    In computer algorithms programs and data are rigidly divorced from one another, in living systems there appears to be no clear boundary between programs and data.
    This corresponds clearly to the proemial relationship between data/programs and distributed systems, that is, contextures. But this description is correct only from an external point of view. Internally there is no fuzziness between data and programs but a complex chiastic interchange of both. The functionality of being a program or being a set of data is strict and dualistic. But a single program can change into data and vice versa in a more fundamental sense than we already know it from programming.
    [...]
    Currently we have virtually no idea how to construct systems of this nature.
    Maybe, the polycontextural approach can offer some implementable ideas.
    Therefore, the PCL approach offers a strategy which is beyond well-known procedures of producing complexities, like recursion, fractal and chaotic processes. [Bingo!!! But we concluded that the fractal and chaotic approach as well offers a strategy which is beyond PCL, and also that fractality and proemiality are interconnected and about the same subject and problem.]

    [...]

    Similar or complementary work to the PCL-Project
    On Architectonics

    On Reflectionality
    meta-level architectures

    On Interactivity
    intentionality
    communication
    cooperation
    cocreation

    On Positionality
    [...]

    On Proemiality
    [...]

    On Polycontexturality
    combining logics
    fibred categories

    What are the decisive advantages of the PCL approach?
    More formalizations than simulations
    The main advantage of the PCL approach lies in the fact that its mode of thematization is basically formalization. Formalization in the sense of PCL produces a very strong connection to operativity on the levels of implementation and realization.
    The PCL approach can be understood as a complementary project to the classical and neo-classical approaches of AI. [...]
    [...]
    A new epoch of production lines
    Therefore, a whole range of working prototypes of new products involving complexity, self-referentiality and interactivity are conceivable.
    TransComputation: A new paradigm of computing
    Nevertheless, the PCL approach is new and distinct from other well-known modern approaches. More exactly, the PCL approach is developing not only some new programming languages as tools but a new medium of thinking and programming in the sense of a change of the very paradigm of computation. The Grand Challenge today is to understand computing beyond information processing and beyond the framework of classical mathematics.
    [...]
    The mega-procedure of proemiality
    The main method of the polycontextural approach can be seen in the concept and apparatus of proemiality. The proemial relationship can be instrumentalized into a strong tool for the project of extending the scope of computing.
    [...]
    No-nonsense; but well-founded in scientific avant-garde
    Although the PCL approach is genuinely new and contemporary, it has a well-founded history in the development of philosophy, cybernetics, and logics, even if the mainstream of second-order cybernetics is not well aware of the fact that PCL, through the work of Günther, Pask, von Foerster, and others, has been a decisive framework of cybernetical thinking since the very beginning of the new cybernetics as it was developed at the BCL.

    Comments on the Grand Challenge Project
    [...]
    1: The Turing paradigm [Bingo!!!]
    [...]
    2: The von Neumann paradigm [Bingo!!!]
    [...]
    fetch-execute-store model of program execution. Rather, other architectures already exist, for example, neural nets, FPGAs. [Bingo!!!]
    3: The output paradigm
    [...]
    4: The algorithmic paradigm [Bingo!!!]
    a program maps the initial input to the final output, ignoring the external world while it executes. Rather, many systems are ongoing adaptive processes, with inputs provided over time, whose values depend on interaction with the open, unpredictable environment; identical inputs may produce different outputs, as the system learns and adapts to its history of interactions; there is no prespecified endpoint.
    randomness is noise is bad: most computer science is deterministic. Rather, nature-inspired processes, in which randomness or chaos is essential, are known to work well.
    the computer can be switched on and off: computations are bounded in time, outside which the computer does not need to be active. Rather, the computer may engage in a continuous interactive dialogue, with users and other computers.
    5: The refinement paradigm [Bingo!!!]
    incremental transformational steps move a specification to an implementation that realises that specification. Rather, there may be a discontinuity between specification and implementation, for example, bio-inspired recognisers
    binary is good: answers are crisp yes/no, true/false, and provably correct. Rather, probabilistic, approximate, and fuzzy solutions can be just as useful, and more efficient.
    a specification exists, either before the development and forms its basis, or at least after the development. Rather, the specification may be an emergent and changing property of the system, as the history of interaction with the environment grows.
    emergence is undesired, because the specification captures everything required, and the refinement process is top-down. Rather, as systems grow more complex, this refinement paradigm is infeasible, and emergent properties become an important means of engineering desired behaviour.
    6: The "computer as artefact" paradigm
    [...]
    A thought provoking list of ideas indeed and as you can see the reference to nature and biology is all over it, or as put in the proposal paper:
    Many computational approaches seek inspiration in reality (mainly biology and physics), or seek to exploit features of reality. These reality-based computing approaches hold great promise. Often, nature does it better, or at the very least differently and interestingly. Examining how the real world solves its computational problems provides inspirations for novel algorithms (such as genetic algorithms or artificial immune systems), for novel views of what constitutes a computation (such as complex adaptive systems, and self-organising networks), and for novel computational paradigms (such as quantum computing).
    ...
    Meta-heuristic search techniques have drawn inspiration from physics (simulated annealing), evolution (genetic algorithms, genetic programming), neurology (artificial neural networks), immunology (artificial immune systems), plant growth (L-systems), social networks (ant colony optimisation), and other domains. [Bingo!!!]
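
    [Comment: As a minimal illustrative sketch of one of the meta-heuristics named in the quote, simulated annealing, we add the following Python fragment; the cost function and all parameter values are our own toy assumptions, not taken from the quoted proposal.]

      import math, random

      def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
          # Randomness is essential here, not noise: uphill moves are
          # accepted with a temperature-dependent probability.
          x, best, t = x0, x0, t0
          for _ in range(steps):
              y = neighbor(x)
              d = cost(y) - cost(x)
              if d <= 0 or random.random() < math.exp(-d / t):
                  x = y
                  if cost(x) < cost(best):
                      best = x
              t *= cooling  # cool down the acceptance of uphill moves
          return best

      # Toy usage: minimize a rugged one-dimensional landscape.
      f = lambda x: math.sin(5 * x) + 0.1 * x * x
      xmin = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=3.0)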

    Non-Academic Projects [Bingo!!!]
    [...]
    Ben Goertzel
    Transhumanism
    General Artificial Intelligence Machine
    Seed AI
    [...]
    Although these projects don't touch the fundamentals of computation, they try to develop a new paradigm of AI and AL.
    There are some correspondences between these transhumanist approaches and what is envisaged by Aaron Sloman and Marvin Minsky. It is the idea of "child computing" and Seed AI, which means not to develop a highly intelligent machine as an analogy to adult human intelligence but to invent the framework of learnability and growing, which would even be able to develop its own logic interactively with its environment.
    From a general philosophical point of view these new trends go together with an "empirical turn" in computer science, back to nature (esp. today: biology), which reflects more the engineering aspect in contrast to the mathematical perspective.
    [...]
    Obviously, the PCL approach is not empiricist [empirical] in the sense mentioned by Wegner, Sloman, Minsky and the Transhumanists; it is, by its polycontexturality and kenogrammatics, beyond this dichotomist situation of rationalism and empiricism realized today by logic-based symbolism and learnable and adaptive neural networks and other approaches.
    It is easy to mention that the polycontextural approach is beyond any sign systems, symbolic or statistical, or whatever, thanks to the pre-semiotic system of kenogrammatics.

    Seed AI, another myth? [Bingo!!!]
    Seed Computing is intrinsically interwoven with a fundamental belief in nature. We have to organize the conditions for learning and growing; nature will do the rest. This homunculus approach, which goes back to the medieval alchemists, has given up on understanding and knowing how to construct an intelligent system. Even with the assumption of a well reflected scientific knowledge about nature in the sense of molecular biology, neurophysiology, etc., it is nature that has to realize the intelligence of the machine and not the engineer.
    Maybe the metaphor of the seed in Seed AI sounds friendly to nature but it denies from the very beginning the definition of seeds. Seeds don't come as singularities, seeds come in pluralities of thousands and more to realize some single succeeding plants. How can the metaphor of seeds be a leading metaphor for an engineering project? Probably only as the singular successful seed denying its possibility as one of millions of unsuccessful seeds. It's great to see the successful growing of a seed. Hopefully it will happen, even against the (mis)leading metaphor, to Seed AI.

    Child like Computing, one more myth? [Bingo!!!]
    As Seed Computing, this approach [presented by C.S. with The Proposal and] favored by Minsky and Sloman is rooted in the metaphor "homunculus" and not in the metaphor of the "robot", constructed by engineers.
    "Child computing" seems not to be mainly rooted in (neuro)biology but in development psychology. Not only Heinz von Foerster but also Minsky refers to the famous Swiss behavioral psychologist Piaget as one of his main empirical source.
    The problem of inheritance is repeating itself again. The unsolved problems of Piaget's psychology will appear as a serious obstacle in AI research.
    My favorite problem in Piaget's psychology is the relationship between accommodation and maturation.
    "There is no accommodation without maturation and there is no maturation without accommodation".
    [...] As long as we don't know what it means to be dialectical and second order cybernetic, I guess, it is not more than a hint in a direction which is probably not fully misleading. But to construct a system on the base of a vague hint is clearly not working.
    What happens? It is the typical situation of confusing description language with construction language.
    There is a lot to learn about the dialectics of Piaget's psychology. But not much about dialectics and its apparatus. The reason is simple, and well known: dialecticians have the strong belief that there is no formal dialectics as there is a formal logic. Each formalization of dialectics would automatically kill its basic dynamics. How can it be possible to construct an artificial developing intelligent system on the base of Piaget's psychology if this psychology is fundamentally based on dialectics and the possibility of an operative dialectics is strictly denied by the dialecticians of all colors?
    It is exactly here that the work of Gotthard Günther comes into play. Before constructing an intelligent machine, he tried to develop a constructive and operational theory and formalism of dialectic systems, in short, of dialectics.
    Obviously, this position is in all aspects beyond the dichotomic paradigm of rationalism and empiricism, and therefore of the double metaphor of homunculus/robot."

    Comment
    None of those, who worked on something related to our Evoos, has ever referenced our work or communicated with us.

    The approach of Seed AI was intentionally misunderstood to ridicule and discredit our Evoos. In the end, it is one singular seed that grows one singular plant. What the author also confuses is the difference between ontogenesis, phylogenesis, and overall genesis, including the evolution of the environment, which reflects section 3.2 Funktionsweise eines Gehirns==Functioning of a Brain of The Proposal: "Insgesamt festzuhalten ist aber, dass das Phenomen der Körperbildung dem Phenomen der Bewusstseinsbildung vorrauszugehen scheint. Eine Ansicht ist, dass der Körper dem Bewusstsein entweder ontogenetisch (in der Entwicklung eines Kindes) oder phylogenetisch (in der Evolution der menschlichen Spezie) vorhergeht. Die Einnahme dieser Ansicht bedeutet auch, dass der physikalische Körper ein notwendiges Substrat für das Mentale sein muss (siehe [Damasso]).==But all in all it has to be noted that the phenomenon of body formation seems to precede the phenomenon of consciousness formation. One view is that the body precedes consciousness either ontogenetically (in the development of a child) or phylogenetically (in the evolution of the human species). Taking this view also means that the physical body must be a necessary substrate for the mental (see [Damasso])."
    Also note that the praised openness for new ideas and experimental concepts of the author himself suddenly disappeared and such ideas are torn to pieces by him. For sure, we already know the answer why it happened with our work of art: He wanted to steal essential parts of Evoos and simultaneously destroy the rest of Evoos at once.

    Nevertheless, at this point we have the next crystal clear and undeniable proof that the author has known The Proposal of C.S..

    Even if the author (Kaehr) of the documents titled "Introducing and Modeling Polycontextural Logics" and "Derrida's Machines", the authors of the documents titled "Arrow System", Sloman "[Cognition and Affection (CogAff) architecture]", Minsky "[panalogy, emotion machine]", N.N. "CoMMA-COGs", Goertzel Seed AI, Child like Computing, General AI, a student/researcher of Sloman "" holonic agent system, MAS, and Autonomic Computing, and Grand Challenge, as well as all other authors, who have presented a part of our Evoos, have done so by utilizing the most clever tricks, it nevertheless becomes obvious in the correlation of their individual views and the final overall view presented in this and our other publications, explanations, clarifications, and investigations that they all are talking about our Evoos.
    Indeed, there is no doubt that

  • on the one hand the author of the Derrida's Machines based on PCL has copied our Evoos, and
  • on the other hand the author of the Arrow System is talking about the same cybernetical Abstract Machine (AM) called by the other author Derrida's Machines, and also
  • CoMMA-COGs, Seed AI, Child like Computing, General AI, ontology-based, agent-based, Agent Chameleons, NEXUS, etc., etc., etc. are about the same subject.

    We have shown in all cases that all bees were dancing and are still dancing around our Evoos at exactly the same time, which indeed must have been the only new thing and obviously was the only new thing at that time. It cannot be explained otherwise by showing an alternative ordinary technological progress or any other progress. Isn't it?

    This can only lead to the implication or ultimate conclusion that the author of the Arrow System has either ...
    We already mentioned this observation and forensic proof in relation to works of the field Agent-Based System (ABS).

    One can also understand now why we formulated the section Integrating Architecture of the webpage Overview of the website of OntoLinux in a way, which might sound a little cryptic. In fact, we were talking all the time about

  • object-level,
  • meta-level, but also
  • meta-meta-level (Binary-Relational Model (BRM), proemiality (e.g. Proemial Relationship Model (PRM), polycontexturality (e.g. polylogic, PolyContextural Logic (PCL))), and even
  • meta-meta-meta-level (kenogrammatics, morphogrammatics, ontologics).

    In fact, it is not the Cognition and Affect (CogAff) architecture or the Emotion Machine Architecture or the integration of both, but our Ontologic System Architecture (OSA), which includes our Evoos Architecture (EosA) and is considerably different and truly original and unique.

    We quote a document, which is about Derrida's Machines and multiparadigmatic programming, and was published in 2003 to 2004: "Derrida's Machines Part I
    Some non-technical background texts to PCL
    Limitations and Possibilities of Communication and Co-Creation
    [...]
    [...] With the introduction of the "proemial relationship" a first effort to realize this trans-logical interplay of unity and diversity is made.
    [...]

    American Second-Order Cybernetics
    The American second-order cybernetics points in particular to the necessity of such a shift:
    Heinz von Foerster: "The logic of our western industrial corporate society (with limited liability) is unidirectional, deductive, competitive and hierarchical, and the keystones of its paradigm are the claim of objectivity and the theory of types, which exclude in principle the autonomy of paradox and of the individual. In the scientific revolution that we now create and experience, however, we perceive a shift from causal unidirectional to mutualistic systemic thinking, from a preoccupation with the properties of the observed to the study of the properties of the observer."
    That which cannot be grasped or expressed from the fundament of logo-centric scientific thinking is an operative time-structuring in which linearity and tabularity, the fields of ruptures, emanation and evolution are communicated as complementary communication structures. Time as a complex system of emanation and evolution is not thought of or conceived as "present" but from the difference, differance [...], i.e., the discontexturality of the contextures "past" and "future" whereby time is freed from the concept of being [... .]
    A further self-incapacitation of thinking occurs not only through the prohibition of basic self-references in formal systems, but also through the presupposition, the apriori of "Potential Realisability" [...] forming the basis of all operative systems. An additional hindrance results from its idealistic concept of infinity which absorbs considerable brainware energy and wastes communication possibilities.
    [...]
    The so-called "New Logic of Information and Communication" belongs to the old paradigm of logocentrism if it is performed as a field of the "New Rethorics" [...], the "Dialogik" [...], or the "Dialectical Logic" [...]. Simply because their aim is the unification of the proponents and the opponents under the summum bonum of rationality and truth. This "new logic of information" is not polylogic but still remains monologic.
    [...]
    Transformation of Man-Machine Communication
    The purely instrumental understanding of technology which predominates today both in engineering and in the humanities is insufficient. A new understanding of the man-machine symbiosis as a heterarchical interplay between mechanism and creativity needs to be developed and practised in connection with the development of new architectures in computer technologies.
    Future art should take a position of sovereignty of creation and production in cooperation with computer technology and new methodologies of thinking. An option could be a co-operation with the theory of polycontextural thinking and operating.

    In a nutshell: Proemiality and Polycontextural Logic
    The idea of an extension of classical logic to cover simultaneously active ontological locations was introduced by Gotthard Günther [...]. His aim was to develop a philosophical theory and mathematics of dialectics and self-referential systems, a cybernetic theory of subjectivity as a chiastic and heterarchical interplay of cognition and volition in a (de-)constructed world.
    Polycontextural Logic is an irreducible many-systems logic, a dissemination of logics, in which the classical logic systems (called contextures) are enabled to interplay with each other, resulting in a complexity which is structurally different from the sum of its components. Although introduced historically as an interpretation of many valued logics, polycontextural logic does not fall into the category of multiple valued logics, fuzzy or continuous logics or other deviant logics. Polycontextural logics offers a framework for new formal concepts such as multi-negational and transjunctional operators.
    The world has "indefinitely" many logical places, and each of them is representable by a two-valued system of logic, when viewed locally and isolated from there neighbour systems. However, a coexistence, a heterarchy of such places can only be described by the proemial relationship in a polycontextural logical system. We shall call this relation according to Günther the proemial relationship, for it prefaces the difference between relator and relatum of any relationship as such. Thus the proemial relationship provides a deeper foundation of logic and mathematics as an abstract potential from which the classic relations and operations emerge.
    The proemial relationship rules the mechanism of distribution and mediation of formal systems (logics and arithmetics), as developed by the theory of polycontexturality. This relationship is characterised as the simultaneous interdependence of order and exchange relations between objects of different logical levels.
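
    [Comment: As a hedged formal sketch in our own rendering, following the usual notation of Günther and Kaehr and not quoted from the document, the proemial relationship chains an order relation within each level with an exchange relation between the levels, so that the relator of one level reappears as a relatum of the next level:

      \[
      R_{i+1} \succ x_{i+1}, \qquad x_{i+1} \leftrightarrow R_{i}, \qquad R_{i} \succ x_{i},
      \]

    so that the proemial relationship may be written as $PR(R_{i+1}, x_{i+1}, R_{i}, x_{i})$ with $x_{i+1} \equiv R_{i}$, where $\succ$ is the order relation between relator and relatum and $\leftrightarrow$ is the exchange relation between the levels $i$ and $i+1$.]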

    Discontexturality: The Art of Thinking Art in ThinkArt
    Creativity and Computability beyond Science and Metaphor
    Art in all its forms has always been a producer of Metaphors. From Plato's writings against Art to Cyberspace there is a celebration of Metaphors and a strict fight against them.
    The opposite of Metaphor or Metaphoricity is the Formality of mathematical Science.
    [...]
    But the role of Science is changing, too. Computer Science is not well understood as the Science of Problem Solving for real world Problems. The new Paradigm of Science is Construction. Computer Science appears now more as Reality Construction than as Problem Solving. And Reality Construction is more an Art than a Science.
    The common ground of Metaphors and concepts is their relation to (the) one and only one idea of rationality, truth, pleasure, beauty and mankind (humanity); its logocentrism and mono-contexturality. [...]
    After having been experienced in a reversal of the order between Science and Metaphor we are now forced to invent a new interplay between Art and Science which rejects the common ground of Metaphor and concept; its Digitalism. Otherwise we would play the same game in reverse. Art has to refuse to be a servant of Science in producing Metaphors.
    Formal concepts as "relation", "linearity", "number", "computability", "net", "information", "interaction", "interface" or "system", "recursion", "re-entry" etc.etc. are discovered and unmasked as belonging to the new continent of Metaphoricity.
    It seems to be necessary to invent/discover a new transdisciplinary cooperation between creativity and computability, between Art and Science, Concept and Metaphor.
    [...]
    System theory lives from its syntax and more concretely from its vocabulary, its set of signs. These signs are pregiven; they are the real starting point of the tectonics of the formal system. The same holds for all applicative systems, too. Even for Autopoietic Systems its components are determining the identity of the system; the system or the operations in it are not defining their components.
    In contrast to system theory, for structural or kenogrammatical systems there are no elements, components or vocabularies outside of the system. There is no sign repertoire as the very first level of the tectonics of the system. And there is no proper starting point of its syntactical structure. The operations - "over the components" - of the systems are producing their proper components; but not as a self-referential and circular organization, but as a chiastic and proemial event or mechanism of self- and co-creation.

    Some reflections about the structure of the new Art Material
    A certain chain of -ISMs
    Ontological developments in recent history confront us with a chain of primary or leading concepts: substance (material), function, system and structure. These basic concepts are organizing and determining the paradigms of Substantialism, Functionalism, System theory and Structuralism - or better: Contexturalism, Kenogrammatics, Proemics.
    [...]

    The formal as the new material
    From the point of view of kenogrammatics (kenos, Greek empty) strict formal terms as "relation", "linearity", "numbers" etc. used in mathematics and formal logic are discovered and unmasked as being purely metaphorical.
    The prototype of a formal system, of operativity and calculability, the Arithmetics or formalized theory of natural numbers, is unmasked as a metaphorical system. Its basic terms are not defined formally; they are used in the general context of common sense in a non-formal but figurative and metaphorical sense. "Natural Numbers are given by God" (Kronecker)
    [...]
    The whole mathematics is unmask[ed] as a masquerade of marks and masks.
    At this epoch the sign was always in the role of the repeater [...].
    To use signs and to sign with signs needs a subject as an actor of the process of signing. To mark is un-masked of subjectivity; marks are marks of marks - and only marks are marks of marks; of theme-selves, themselves without a self of them.
    Therefore, we begin the game again with the kenogram; it is neither mask nor mark.

    About the Art of Programming Art
    [...]

    [...]

    "Well-defined" problems in creating problems
    [...]

    What could we understand by creativity?
    [...]

    Developments in Computer Science and Second-order Cybernetics show us that the new movements in Information Technology can be understood as a radical new structural cut between "subjectivity" and "objectivity". We propose that this structural view of the developments opens up a more relaxed understanding of cyberculture than the one defined by the paradigm of information processing.
    [...] The new cut of the computer revolution is purely "contextural" and is not well understood in terms of space, time, reality, identity and information and its deviants, concepts which still belong to the classical paradigm of modern science, which itself is based on the first cut of cartesian philosophy and science.
    [...]
    Gotthard Gunther wrote in "Cognition and Volition. A Contribution to a Cybernetic Theory of Subjectivity" (1970):
    ... since the Aristotelian epistemology required a clear cut distinction within subjectivity between subject as the carrier or producer of thoughts and the thoughts themselves, it was reasoned that the subject of cognizance could have rational thoughts without being a rational entity itself.
    It should be kept in mind that, if we postulate a polycontextural Universe, the barriers which now cut through this empirical world, have lost nothing of their intransigency by being multiplied.
    In order to integrate the concept of discontexturality into logic we have introduced the theory of ontological loci. Any classical system of logic or mathematics refers to a given ontological locus; it describes the contextural structure of such a locus more or less adequately. But its statements - valid for the locus in question - will be invalid for a different locus.
    How can artists help to revolutionise the new technologies? And how can new technologies help to transform art and the self-understanding of the artist? Which new framework of logics and rationality do we need to formulate and to formalise this new form of thinking beyond classical dichotomies?
    [...]

    To use and to be used by technology and beyond
    [...]

    Questions and Outlooks faced at the Academy of Media Arts
    [...]
    Taking this fact seriously means that artists should not only be trained in using tools but should learn to design and program their own computing environments in a creative manner, i.e. to develop their own systems. This activity of personal programming and constructing interfaces by artists - which are different from those of innovative engineers - has to be supported and guided by a new framework of conceptual orientation.
    [...]

    What are the new Paradigms of Computation?
    Our method of concept mining (in contrast to data mining) produced an interesting list of developments in post-classical computing in the field of Beyond Computation. It's all about surpassing the limits of algorithms and Turing Machines. [...]
    [...]
    Reflectional Programming
    [...]
    Computational Ontology beyond Identity
    Real-world computer systems involve extraordinarily complex issues of identity. Often, objects that for some purposes are best treated as unitary, single, or "one", are for other purposes better distinguished, treated as several. The aim of the Computational Ontology project is to focus on identity as a technical problem in its own right, and to develop a calculus of generalized object identity, one in which identity -- the question of whether two entities are the same or different -- is taken to be a dynamic and contextual matter of perspective, rather than a static or permanent fact about intrinsic structure. [...]
    Polycontextural Logic: Transjunctions of viewpoints and contextures
    [...]
    Logical fiberings prove to be particularly suitable for modeling communication and interaction between co-operating agents, due to the possibility of switching between a local/global point of view, which is typical for this framework. [...]
    Topics in Co-Operation, Interaction, Co-Creation
    Algorithms and Turing machines (TM) have been the dominant model of computation during the first 50 years of computer science, playing a central role in establishing the discipline and providing a deep foundation for theoretical computer science. We claim that TMs are too weak to express interaction of object-oriented and distributed systems, and propose interaction machines as a stronger model that better captures computational behavior for finite interactive computing agents. Moreover, changes in technology from mainframes and procedure-oriented programming to networks and object-oriented programming are naturally expressed by the extension of models of computation from algorithms to interaction. [...]
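
    [Comment: As a minimal illustrative sketch of the quoted contrast between algorithms and interaction machines, assuming class and method names of our own choosing, the following Python fragment shows a machine whose replies depend on the history of the dialogue, so that identical inputs yield different outputs over time.]

      class InteractionMachine:
          # A toy interaction machine: the mapping input -> output is not
          # a fixed function, because it depends on the interaction history.
          def __init__(self):
              self.history = []

          def interact(self, message):
              self.history.append(message)
              return "%s/%d" % (message, len(self.history))

      m = InteractionMachine()
      assert m.interact("ping") == "ping/1"
      assert m.interact("ping") == "ping/2"  # same input, different output
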
    Patterns of Self-(Organisation, Reference, Amendment, Reproduction)
    Ideas of self-reference (and its self-modification), and their application to cognition have a much longer history, however. [...]

    How to Organise our Work of Programming Art?
    XP as a specific method of generating software
    eXtreme Programming [...]
    UML as a general method of modelling projects
    The Unified Modeling Language (UML) is the industry-standard language for specifying, visualizing, constructing, and documenting the artefacts of software systems. Using UML, programmers and application architects can make a blueprint of a project, which, in turn, makes the actual software development process easier. Mostly computer art projects are much too complex and too ambitious to be realised in the context of the usual art funding. UML can help to design a conceptual model of the project. It could be reasonable to accept this UML modelling as a first realisation of the concept of/as the art work.

    Which Languages for the Art of Programming?
    Interactive Programming [...]

    Computation and Metaphysics
    [...]
    Computers in the sense of transclassical cybernetics are not simply a tool or a medium but much more a radical new step in the understanding and transformation of the world and human nature in a trans-terrestrial world game.
    Computation and Metaphysics today
    Questions of cracking identity in formal logical and computing systems are finally recognized now by leading computer scientists.
    [...]"

    Comment
    Although we disagree with the point of view monologic vs. polylogic and so on due to the

  • linguistic problem,
  • Symbol Grounding Problem (SGP), and
  • our point of view on the whole matter,

    the overall debate nevertheless proves that our Evoos and OS are works of art and not just scientific concepts or system descriptions.
    Therefore, copyright worldwide.

    Because we have shown that

  • a graph can be described as a serial list, which also holds for morphogrammatics, kenogrammatics, and the proemial relationship, and even for processes (e.g. an object can be stopped, serialized, transferred (e.g. migrated), deserialized, and executed again) (see the sketch after this list), and
  • (as discussed with the document titled "The Mystery of the Tower Revealed: A Nonreflective Description of the Reflective Tower" in the OntoLinux Further steps of the 21st of August 2010) the reflective tower can be described formally as a linear, acyclic metastructure, the Reflective Tower is an Abstract Machine (AM) or Virtual Machine (VM), and an AM or VM is formally describable, hence objectivity comes first and subjectivity is a later feature or functionality of the AM or VM,
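
    As a minimal sketch of the first point, assuming hypothetical class and variable names of our own choosing, the following Python fragment serializes a cyclic object graph into a serial list of bytes and rebuilds it, and also stops, serializes, "migrates", and resumes a simple stateful process:

      import pickle

      # A cyclic object graph: a -> b -> a.
      a = {"name": "a"}
      b = {"name": "b", "next": a}
      a["next"] = b
      blob = pickle.dumps(a)    # the cyclic graph flattened to a serial byte list
      a2 = pickle.loads(blob)   # ... and rebuilt, with the cycle intact
      assert a2["next"]["next"] is a2

      # A process as resumable state: stop, serialize, transfer, resume.
      class Counter:
          def __init__(self, n=0):
              self.n = n
          def step(self):
              self.n += 1
              return self.n

      c = Counter()
      c.step(); c.step()
      moved = pickle.loads(pickle.dumps(c))  # e.g. migrated to another host
      assert moved.step() == 3               # resumes where it was stopped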

    "Pfalzgraf: Fibring logics and multi-agent robotics
    Today's concepts of fibering or combining logical systems do not include the reflectional aspects of interaction and togetherness. Basically its ontology remains mono-contextural. The problem of multitudes is shifted from the ontology or the universe of the underlying logic to its sorts in a many-sorted logic. Multitudes, plurality is therefore based in singularity and unicity as well explained in the theory of institutions. To model reflectional interacting systems we have to introduce a polycontextural ontology with all the consequences for logics, arithmetics, semiotics and so on."
    "Two ways of modeling: dissemination and fibering
    Additionally to the approach of disseminating systems by means of proemiality, distribution and mediation, we can model this procedure in the category framework of logical fibering. This approach is well known by the work of Pfalzgraf (1988 - ). Maybe it is possible and helpful to make the distinction that the disseminatory approach is corresponding more to a proto-theoretical thematization whereas the category theoretical approach reflects more a meta-theoretical point of view."
    But reflection comes through the Abstract Machine (AM) or Virtual Machine (VM) described by many-sorted logics. A reflective Distributed OS, Holonic AS, and reflective Multi-AS, like our Evoos integrating all together, are merely missing the formal language for their description.

    Therefore, PCL is not required and many-valued logics are sufficient; already a reflective three-valued logic does the job, and fibring logics mixing in fuzzy logic does it anyway.
    But the description might not be unique, or rather more than one description might exist, which also have the same size in relation to the chosen media for presentation and compression ratio in relation to AIT.
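
    As a minimal sketch of the kind of many-valued tool we refer to, a strong Kleene three-valued logic with the truth values T, F, and U (undetermined) can be written down in a few lines of Python; the encoding is our own illustrative assumption:

      # Strong Kleene three-valued logic: T, F, and U (undetermined).
      T, F, U = "T", "F", "U"

      def k_not(a):
          return {T: F, F: T, U: U}[a]

      def k_and(a, b):
          if F in (a, b):
              return F
          if U in (a, b):
              return U
          return T

      def k_or(a, b):
          return k_not(k_and(k_not(a), k_not(b)))

      # U records that a contexture has not decided yet.
      assert k_and(T, U) == U and k_or(T, U) == T and k_or(F, U) == U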

    [to be continued]

    We quote a document, which is about Derrida's Machines and multiparadigmatic programming, and was published in 2003 to 2004: "Derrida's Machines Part I
    Exploiting Parallelism in PCL-Systems
    Polycontextural Strategy towards the Challenge of Parallelism
    Motivations
    [...]
    Living systems are not based on expensive exploitation of nature. They are naturally non-expensive but highly complex. Complexity in contrast to complication (=nowadays complexity) is a strategy of nature that is not repeated in science. Science behaves quite non-naturally in thinking and replicating natural systems.
    [...]
    Today's reality of computing isn't mirrored properly in the framework of mono-contextural concepts, models and methods. To realize parallelism, it would be much more straightforward to understand that parallelism is a quite special case of interactivity between more or less autonomous systems.
    Interactivity is well understood in the framework of poly-contextural systems. Therefore I propose that parallelism is a special case of polycontexturality.
    [...]

    Parallelism in hierarchies
    [...]
    Parallel Graph Reduction
    [...]
    Problems of resources: sparking, blocking, strategies
    In living systems redundancy is inherent.
    In cloned systems tasks don't have to block each other
    [...]

    Parallelism in Heterarchies
    [...]

    Hierarchical parallelisms in Heterarchies
    [...]

    A very first step of modeling poly-contextural parallelism
    [...]
    Implementing the mirroring metaphor
    Following the steps I developed above it should be evident that the mirrored objects are not stored in some memory outside the logical construction. This process of mirroring belongs entirely to the logical construction of the interaction.
    To give this construction a more physical metaphor we can speak about an implementation of the calculation and the reflections on a stack machine. This means that not only the calculations are realized by their stack but also the positioning of the results at other places is realized by new separate reflectional stacks, stacks inside the polycontextural stack machine, which are simultaneously dealing with the mirrored data of the neighbor systems. Each intra-contextural stack simultaneously has a reflectional stack for each of its neighbor systems.
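
    [Comment: As a minimal illustrative sketch of the quoted construction, assuming names of our own choosing, the following Python fragment gives each contexture an intra-contextural stack plus one reflectional mirror stack per neighbor system.]

      class ContextureMachine:
          # Toy stack machine for one contexture, with one reflectional
          # stack per neighbor mirroring that neighbor's results.
          def __init__(self, name, neighbors):
              self.name = name
              self.stack = []                            # intra-contextural stack
              self.mirrors = {n: [] for n in neighbors}  # reflectional stacks

          def compute(self, value):
              self.stack.append(value)
              return value

          def mirror(self, neighbor, result):
              # A neighbor's result is not fetched from a shared memory
              # outside the construction; it is pushed into a dedicated
              # mirror stack inside this machine.
              self.mirrors[neighbor].append(result)

      m1 = ContextureMachine("C1", ["C2"])
      m2 = ContextureMachine("C2", ["C1"])
      m1.mirror("C2", m2.compute(42))  # C1 simultaneously holds C2's datum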

    A further step of modeling poly-contextural parallelism
    [...]
    Comments
    [...]
    At each contexture the whole problem statement is mirrored. The other programs, machines, loci know what is needed and can deliver it if they are ready with their own calculation. This is different from a retrieval system like in Prolog, where results can be retrieved from a common data base. This method needs for itself a store and search mechanism which has to be managed in favor of the main task.
    [...]

    [...]

    Complexity measures of calculations
    [...]

    Parallelism in Polycontextural Logic
    Additionally to the well known OR- and AND-parallelism, polylogical systems offer two main extensions to the logical modeling and implementation of parallelism. First, the distribution of the classical situation over several contextures and second, the transcontextural distributions ruled by the different transjunctional operators. The distribution over several contextures corresponds to a concurrent parallelism where the different processes are independent but structured by the grid of distribution. The transcontextural parallelism corresponds to a parallelism with logical interactions between different contextures. The logic of parallelism is to be distinguished from parallelism in logic [...] as it is developed in classical logic programming languages.
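
    [Comment: As a minimal illustrative baseline for the quoted distinction, the following Python fragment sketches the classical OR- and AND-parallelism only; the distribution over contextures and the transjunctional, transcontextural parallelism of PCL have, to our knowledge, no standard library analogue, and all names are our own assumptions.]

      from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

      def or_parallel(goals):
          # OR-parallelism: try alternative clauses at once, first proof wins.
          with ThreadPoolExecutor() as ex:
              done, _ = wait([ex.submit(g) for g in goals], return_when=FIRST_COMPLETED)
              return next(iter(done)).result()

      def and_parallel(subgoals):
          # AND-parallelism: solve all conjoined subgoals concurrently.
          with ThreadPoolExecutor() as ex:
              return [f.result() for f in [ex.submit(g) for g in subgoals]]

      assert or_parallel([lambda: 1 + 1, lambda: sum((1, 1))]) == 2
      assert and_parallel([lambda: 2, lambda: 3]) == [2, 3]
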
    [...]
    Prolog is based not only on its logic, used as an inference machine, but also on its semantics or ontology, realized as a data base. Therefore the process of parallelising has to deal with a deconstructive dis-weaving of the data base's ontology.

    Strategies towards a polycontextural parallelism in Prolog
    [...]

    An intermediate step with Metapattern
    As an intermediate step in the shift of conceptualization from a hierarchical to a heterarchical way of concept building it may be helpful to use the strategy of metapattern [...]. Metapatterns are used as a new modeling strategy for complex informational systems. Metapatterns are not involved in changing the basic assumptions of programming languages or even their logic as with the PCL approach.
    [...]
    Identity as a network of nodes
    Traditional object orientation assigns identity at the level of overall objects. Context orientation replaces this view of singular objects with that of plurality within the object; the object always needs a context to uniquely identify the relevant part of an overall object, which is what identifying nodes regulate. When behaviors are identical, no distinction between contexts is necessary.
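
    [Comment: As a minimal illustrative sketch of the quoted context orientation, assuming hypothetical names of our own choosing, the following Python fragment decides sameness relative to a context rather than at the level of the overall object.]

      class ContextualObject:
          # Identity as a network of nodes: one overall object with
          # per-context nodes; sameness is decided relative to a context.
          def __init__(self, **contexts):
              self.contexts = contexts

          def node(self, context):
              # Fall back to the overall object when behaviors are identical.
              return self.contexts.get(context, self)

          def same_in(self, other, context):
              return self.node(context) == other.node(context)

      p = ContextualObject(billing="acct-7", shipping="addr-3")
      q = ContextualObject(billing="acct-7", shipping="addr-9")
      assert p.same_in(q, "billing")       # the same in one context
      assert not p.same_in(q, "shipping")  # different in another context
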
    [...]

    [...]

    Prolog's double parallelism dismantled in polylogical systems
    As mentioned above, Prolog has, additionally to its well known parallelism of OR- and AND-procedures, and some others, a new form of parallelism which is introduced by the process of deconstructing, dis-weaving, decomposing, de-sedimenting its basic ontology as presupposed in Prolog's data base. This poly-contextural thematization of Prolog's ontology goes together with the possibility to model its classical parallelism into the architectonic parallelism of polycontextural logic systems similar to the modeling of the graph parallelism of functional languages into the polycontextural architecture.

    Polylogics
    Each dimension, each level, each contexture of a complex object or system has its own ontology and its own logic. Furthermore, on a model-theoretic level, each contexture has its own theory of complete lattices in the sense of Garrett Birkhoff, which allows a very detailed analysis of the conceptual space of the objects belonging to each contexture. But lattices are not mapping the interactions between lattices of different contextures.

    Polycontexturality proposed or produced
    Instead of postulating polycontexturality as a new option it is possible to show the mechanism who [how] to pass from a mono-contextural ontology to a poly-contextural one. This mechanism is described as a proemiality between logical sorts and their common logical universe. Sorts of a logic can be changed into universes of another polycontextural logic. And from a polycontextural logic the inverse procedure is possible, universes can change their role to sorts.
    To postulate polycontexturality is legitimate because the mono-contexturality of classical logic and ontology is itself postulated and cannot be proved. There is no proof which decides between mono- and poly-contexturality inside the framework of classical logic and ontology.
    It seems that Gunther is postulating polycontexturality, referring sometimes to reality; my emphasis on proemiality, on the other hand, is to produce a mechanism of introducing or generating polycontexturality.
    [...]

    Deconstruction Hierarchies in OOP frameworks

    Tableaux Logics
    [...]

    Why not Grid Computing?
    [...]
    Again, the main problem will be coordination and protection of the grid and the local systems. As long as interaction between computer systems is reduced to information processing there seems to be no big hope to solve these problems.
    The PCL approach offers a concept and maybe a strategy which goes beyond information processing, the trans-contextural interaction, ruled by the trans-logical operators of transjunctions for solving the problems of interaction and separation.
    Interactivity and contextural abstraction
    [...]
    The question of embeddedness of interactive agents is tackled on a structural level. Each machine has its own logic which is not identical to the other machines but the same in the sense of likeness or similarity. From a conceptual point of view there is no difference in logic between different classical computing systems, they are ruled by the application of the identical logical system. Applied logical systems are structurally identical and subsumed to the abstract notion of logic per se.
    Polylogical systems are not applications of an abstract logic but themselves realizations of different logics.
    The big question arises: how can we implement polylogically distributed systems? How can the same logic be different from the logic of the other systems?
    It is important to see that the operation of contextural abstraction enables a clear cut between the local logical operation and the global interactive logical operations inside the very kernel of a polylogical system.
    The proposed contextural abstraction should not be confused with a possible contextual or context abstraction. Contexts are parts of contextures like sorts are parts of a universe (of discourse)."

    Comment
    It is an interesting thought model. But as the author himself says, "To postulate polycontexturality is legitimate because the mono-contexturality of classical logic and ontology is itself postulated and cannot be proved. There is no proof which decides between mono- and poly-contexturality inside the framework of classical logic and ontology."

    But we have an even bigger problem due to the fact that quantum computing already works in certain ways of proemiality and polycontexturality.

    We quote a document, which is about Derrida's Machines and multiparadigmatic programming, and was published in 2003 to 2004: "Derrida's Machines Part I
    Minsky's new Machine
    Proemiality and Panalogy
    Cognitive Systems and Panalogy Architectures
    [...]
    This idea of a proemiality between structurally different systems can be brought to a more concrete level as an interplay of the four aspects of a living system: "architectonics", "reflectionality", "interactivity" and "positionality". I chose these terms because they show a possible connection to existing work in the fields of AI, robotics, living systems, etc.
    [...]
    The polycontextural approach to cognitive systems postulates that cognitive systems are from the very beginning involved in the interplay of (at least) these aspects of specifications.
    At the time there are some very interesting developments in AI, robotics and other branches, collected by terms like "Cognitive Systems" (DARPA), "Architectures" (Sloman, Minsky) and "Emotion Machine" (Minsky[, Singh]), "Common Sense Interfaces", etc.
    The main background idea and strategy seems to be to introduce multitudes against single monolithic concepts and methods. Slogans like "Multiple ways of thinking", "Diversity of ways of thinking", "Parallel ways of thinking", etc. The introduction of different agents like critics is part of the dissolution of the monolithicity of classical modeling in AI.
    Minsky calls one important case of multitudes "parallel analogy" or short panalogy.
    Push Singh is on the way to writing and implementing in his Ph.D. dissertation The Panalogy Architecture for Commonsense Computing.
    The Panalogy Principle: If you 'understand' something in only one way then you scarcely understand it at all - because when something goes wrong, you'll have no place to go. But if you represent something in several ways, then when one of them fails you can switch to another. That way, you can turn things around in your mind to see them from different points of view - until you find one that works well for you now. And that's one of the things that "thinking" means! We cannot expect much resourcefulness from a program that uses one single technique - because if that program works in only one way, then it will get stuck when that method fails. However, a program with multiple 'ways to think' - the way we'll describe in Chapter §7 - could behave more like a person does: whenever you get frustrated enough, then you can switch to a different approach - perhaps through a change in emotional state.
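
    [Comment: As a minimal illustrative sketch of the quoted panalogy principle, assuming names of our own choosing, the following Python fragment switches to another "way of thinking" when the current one fails.]

      def panalogic_solve(problem, ways_of_thinking):
          # If one way to think fails, switch to another representation
          # instead of getting stuck.
          failures = []
          for name, way in ways_of_thinking:
              try:
                  return name, way(problem)
              except Exception as reason:          # a real critic would be subtler
                  failures.append((name, reason))  # remember why it failed
          raise RuntimeError("all ways of thinking failed: %r" % failures)

      ways = [
          ("arithmetic", lambda p: eval(p, {"__builtins__": {}})),  # crisp
          ("lookup",     lambda p: {"two plus two": 4}[p]),         # by analogy
      ]
      assert panalogic_solve("2 + 2", ways) == ("arithmetic", 4)
      assert panalogic_solve("two plus two", ways) == ("lookup", 4)
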
    Minsky
    Sloman's Email
    One aspect of the broader view is the way in which a growing interest in architectures and varieties of forms of representation replaces, or rather subsumes, the older emphasis on algorithms and data-structures.
    By 'architecture' I don't mean what computer engineers used to mean: the low level organisation of a kind of hardware device, e.g. a Turing Machine architecture, or a VonNeumann architecture, or a Vax architecture. Rather the study of architecture includes the study of all sorts of ways of constructing complex functioning systems from many, possibly diverse, components. This includes software architectures, virtual machine architectures, hybrid architectures, etc.
    I expect the study of architectures, especially layered virtual-machine architectures will continue to grow in importance. We may need entirely new kinds of mathematics for this.
    From: Aaron Sloman (A.Sloman@cs.bham.ac.uk)
    Date: May 26, 2003 23:22
    Push Singh's main questions
    In order to explore how to build an architecture of diversity for commonsense computing, my thesis will explore these questions:

  • How can we represent a "way of thinking"?
  • How can we map types of problems to types of ways of thinking?
  • How can we detect problems with the current way of thinking?
  • How can we switch efficiently between ways of thinking?

    Ways of thematicizing [thematizing]
    In earlier papers I introduced some distinctions to characterize how we are thematicizing [thematizing] our subject.
    Explanation (Narration, Metaphors, Notions)
    Formalization (Mathematics, Logics)
    Implementation (Modeling, Computer implementation)
    Realization (Construction, real-world performance)
    [...]
    From this point of view the project "Cognitive Systems" with its architectures is situated in the field of Explanation, see Minsky's book "Emotion Machines" [(not published at the time of writing)], Implementation, Modeling, see SADE, CoGaff [CogAff], SOAR, and Realization in the domain of Modeling. This implies Formalization too. But here we observe a very classical situation without any attempts in the direction of the slogan "Multiple ways of thinking". The whole monolithic concept and apparatus of mathematical and logical thinking and reasoning is accepted; at least it is not a topic of the new panalogy program. The same happens to the realization of the model: it has to run on a classical computer, accepting the paradigm of algorithms as formalized e.g. in the Turing machine and the concept of information as formalized by Shannon.

    Nevertheless, to discuss and surpass the limits of the formalization power of mathematics for the realization of artificial living systems was one of the aims at the Biological Computer Laboratory (BCL) in the early days of AI research.
    The only voice concerning mathematics in connection with the Grand Challenge Project I found in Sloman's email. "We may need entirely new kinds of mathematics for this." But this statement can have itself a multitude of interpretations.
    [...]

    Complementary Work?
    [...]
    [...] What are "Multiple ways of thinking" in logic and arithmetics? One actual answer to this question we can find in the growing approach of Combining Logics (fibring, labelling, weaving formal systems [...]). This trend is not yet recognized by the vision of panalogy. [...] this work [based on panalogy] is not radical enough at all. Because it is based on a monolitical kernel of classical logic. The diversity comes here to a stop at the bottom and ends in monolicity. On the other hand, it is based in its meta-language, category theory and multi-sorted logics, on a monolitic monster at the top.
    That classical logics in all its forms are not enough for the study of cognitive systems may be well known. Not only Kant and Hegel discovered it, but also Peter Wegner was criticizing the Japan Project from this point of view. Prolog, based on first order logic, is too weak to cope with interaction. But the common strategy of Hegel and Wegner is to avoid logic and to switch to a more speculative or empirical level of modeling, instead of transforming the paradigm of logic itself. There is no reason to believe that logic is something natural like the natural numbers of arithmetics and that it could not be changed as the naturality of the natural numbers can be de-mystified. This challenge is not accepted at all; the result is, again, some regression into non-formal thinking.
    [...]
    "I first presented the idea that Turing machines cannot model interaction at the 1992 closing conference of the Japanese 5th Generation computing project, showing that the project's failure to reduce computation to logic was due not to lack of cleverness on the part of logic programming researchers, but to theoretical impossibility of such a reduction. The key argument is the inherent trade-off between logical completeness and commitment. Commitment choice to a course of action is inherently incomplete because commitment cuts off branches of the proof tree that might contain the solution, and commitment is therefore incompatible with complete exploration."
    [...]
    As mentioned above, Wegner's strategy to surpass this limiting situation is not to deliberate [liberate] the paradigm of formality, which is defining the very concept of logic and all the concrete logical systems, but some form of regression to empiricism.
    "Logic can in principle be extended to interaction by allowing nonlogical symbols to be interactively modified (reinterpreted) during the process of inference, for example by updating a database of facts during the execution of logic programs. However, interactive discovery of facts negates the monotonic property that true facts always remains true."
    [...]
    This strategy of extending logical systems by non-logical symbols for modeling interaction introduces into logic some non-logical elements of empiricism. For practical reasons this approach has its merits. Nevertheless, from a structural point of view of operativity and formality nothing has changed. Still the old logic is ruling the situation.
    This strategy of extending the concept of pure classical logic is well known, at least by the work of mathematical linguists. [...] At least, they are all based on a kernel of classical logic. But this kernel is taboo.
    [...] a radical change of the kernel itself [...] closely connected with idea of polycontexturality and proemiality.
    The complementary aspect of Minsky's approach to the polycontextural approach is expressed by the statement
    "We'll try to design (as opposed to define) machines that can do all those 'different things'." Minsky
    The question of definition is a logical one, the process of design belongs to the domains of modeling, simulation, implementation and not to formalization.
    It seems not to be easy to escape the challenge of logics. All the tools and methods of design, programming languages, LISP obviously too, are based on logic. The same is the case for the machines.
    Why should the process of design be restricted by the structure of its classical tools?
    Some complementary aspects of MIT related and PCL related work.
    Diagram 45

    [work of] | explanation | formalization | implementation | realization
    MIT | psychology, Piaget, linguistics, common sense | mono-contextural, parallel analogy, classical logics, monoton vs. non-monoton, semiotics as applications | meta-level AI programming languages, LISP | SOAR
    PCL | philosophy, foundational studies, logic, mathematics, deconstruction | poly-contextural, proemiality, polycontextural logics + arithmetics, morphogrammatics, kenogrammatics | coalgebra, fibres as modeling, M[eta] L[anguage] | as simulation and real appl[ications]

    Minsky's Architecture: The Six Level Model
    Marvin Minsky offered the Six Level Model from his forthcoming book The Emotion Machine as an initial proposal for such an architecture. This architecture is being developed jointly by himself and Aaron Sloman, and is based on several key ideas:
    1. Use several approaches, at once, to each problem. [...]
    2. Have many ways to recognize and respond to internal and external problems.
    The architecture consists of layers of agents, where each layer is concerned with coordinating, managing, and responding to problems in the layers beneath it. Within each layer, there are 'critics' that detect types of problems in the layers beneath or in the outside world. These then turn on 'selectors' that invoke methods for resolving these problems.
    3. Support many different "ways of thinking". The most important high-level operation is mapping types of problems to large-scale "ways of thinking". Each way of thinking disposes the system to use certain types of knowledge, methods of reasoning, types of critics, and other kinds of resources to solve the problem at hand. This architecture is really a kind of meta-architecture, one that invokes more specific architectures in response to the kinds of problems the system encounters.
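
    [Comment: As a minimal illustrative sketch of the quoted layered architecture of critics and selectors, assuming names of our own choosing, the following Python fragment lets each layer watch for problem types and invoke a method for them, escalating to the layer above otherwise.]

      class Layer:
          # One layer of the Six-Level-style model: critics detect a type
          # of problem, selectors invoke a method for resolving it.
          def __init__(self, critics):
              self.critics = critics  # list of (detect, select) pairs

          def respond(self, situation):
              for detect, select in self.critics:
                  if detect(situation):
                      return select(situation)
              return None  # no critic fired: escalate to the layer above

      reactive = Layer([(lambda s: s == "obstacle", lambda s: "swerve")])
      deliberative = Layer([(lambda s: s == "lost", lambda s: "replan route")])
      for layer in (reactive, deliberative):  # lower layers get the first try
          action = layer.respond("lost")
          if action:
              break
      assert action == "replan route"
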
    The St. Thomas Commonsense Symposium
    Marvin Minsky, Push Singh, MIT, May 13, 2002

    The Polycontecturality [Polycontexturality] Approach
    As the name suggests, polycontextural logic, polycontexturality in general, is interwoven with multiplicity from the very beginning.
    Because of the complementary thematization of cognitive systems I am working on, it is easy to confuse common terms, like architecture, reflection, interaction, etc.
    As a trial I define a cognitive system as a chiastic entity with the following structural aspects.
    Architectonics
    Reflectionality
    Interactivity
    Positionality
    More details can be found in the following chapter Proemiality and reflectional architectures.

    Architectonics
    The operator of architectonics is the cut.
    Classical science and computing is based on a single cut, the Cartesian cut. This cut is producing the difference of internal and external domains, states, events. And most other dichotomies are based on this Cartesian decision.
    Architectonics is defined by structural, that is, epistemic cuts. The cut between internal and external domains, the cut of the internal as a self and a model of another self.
    Classical computing is still imprisoned by the simple Cartesian cut: inside/outside. Computational reflection in the sense of Smith tries to escape this frame in introducing differences in the inside of the system.
    A short philosophical remark. Mostly, we are occupied by temporal studies, studies of the temporal behavior of systems, in short, with time. Even the circus of self-referentiality was a drama of time. Our concept of space is not welcomed because of the fear of objectification of subjective events. Only recently, a new emphasis on architectures in the theory of living systems emerges. Architectures are not tectonics.
    [...]
    Towards some cuts inside the cartesian cut
    Object systems
    Meta-systems
    Meta-level systems
    Because there is no theory about the process of cutting, reflectional programming is forced to introduce meta-circular interpreters.
    Architectonics vs. Tectonics
    Sign systems, as the scriptural medium of computation, are structured by their tectonics. This type of hierarchical tectonics is based on a single cut architectonics. But this cut is not part of the sign system, it is much more its blind spot.
    From the point of view of polycontecturality [polycontexturality], sign systems, semiotics, are not structured by architectonics.
    Architectonics are understood as a form of mediation of structural and processual, algebraic and coalgebraic, principles.
    Architectonics is not a Ur-ground, a static and eternal fundament, origin of everything. Architectonics is complex and dynamic, giving space for a multitude of beginnings of interacting tectonic systems. Starting a list of questions and possible answers about reflectional blindness.
    What is the blind spot of a program? Answer: Its operating system.
    What is the blind spot of an OS? Answer: Its hardware system.
    What is the blind spot of a computer system? Answer: Its users. Or: Its environment.

    Reflectionality
    In the history of cybernetics and computer science reflectionality was reduced mainly to recursive and self-reflectional concepts. The most famous approach is surely the Y-operator of combinatory logic for the whole of AI, esp. LISP, and also well known, but more in the circles of second-order cybernetics, the re-entry concept of Spencer-Brown's Calculus of Indication. This re-entry concept has destroyed by its simplicity and mysticism all germs of complex architectonics of the early second-order cybernetics [...].
    Introspection
    Reflection
    Awareness
    Self

    Interactivity
    Communication
    Cooperation
    Cocreation

    Positionality
    Embeddedness
    Situation
    Incorporation
    Embodiment
    Classification of "Cognitive Systems"
    Today's approaches to Cognitive Systems are therefore classified as
    Architecture: one cut, internal/external
    Reflectionality: Intentionality, representation of the external world
    Interactivity: communication by means of information
    Positionality: unicity as blind spot
    Classical computing systems are well described as systems with a single cut, where reflectionality is reduced to representation of the world producing information, interactivity occurs as communication, communicating information and the blind spot of classical computing is its positionality.
    This characterization shows clearly the conflict of introducing panalogy architectures in a mono-contextural paradigm.
    What about some traditional specifications of our understanding of ourselves and the world?
    Metaphysics
    Ontology
    Epistemology
    Gnoseology,
    Logic
    The term cognitive, in cognitive systems, seems not to be very clear. The aspects of affect, emotion, and decisions are not necessarily components of cognition. It would be more adequate to name such systems subjective systems, as composed by cognitive and volitive functions (Gunther, Cybernetic Theory of Subjectivity).
    Is the very concept of Cognitive Systems or Emotion Machine in itself panalogic? In other words, is the pananalogy [panalogy] of the new approaches, Cognitive Systems and Emotion Machine, mono- or polycontextural?

    Togetherness of living systems
    [...] A more genuine reading of Heidegger gives us some hints not to confuse anthropology with his strict structural deconstruction of ontology.
    The desire to build a machine with cognitive, emotive and volitive behaviors shouldn't try to implement some sort of classical anthropology and its (child)psychology, but should be guided by the structural analysis of the conditions of being in the world of living systems. This excludes not only (phenomenological) psychology but also biological approaches.
    Architectonics may be a hard but strict interpretation of "Mit-Sein" [being-with].

    The so called "Blind Spot" exists only for an analysis of living systems as cognitive systems that is, on the base of representations (Vorstellungen) and information. It doesn't change much if the framework of cognition is set in a more constructivist manner. The same problems of "reflective blindness" emerge. Simply because the restriction of living systems to cognition and all the optical metaphors of mirroring, reflection, and view points arise. The blind spot is mainly a result of cognitive solipsism. The Blind Spot problem is not solved by duplicating it by two cognitive systems instead of only one, as proposed by [a student/researcher of Sloman] (2003). The reason is obvious, there is no structural difference between the two cognitve systems as cognitive systems they are the same, and have in common the general idea of being a cognitive system. In other terms, its performance is an Ego-Ego-relation and not an Ego-Thou-relation.
    It was exactly this Cartesian burden which Heidegger was deconstructing. His Daseinsanalyse is much more volitive, pragmatic than cognitive.
    Cognitive Science as a base of cognitive AI is still dreaming in the Cartesian cage.
    Embodiment, embeddedness, situatedness, etc. are terms in the direction of an abandonment of the dominance of cognitivism.
    Togetherness as structural interactivity. Maturana's structural couplings.
    Mismatches of architectures in interaction

    Conceptual graph of togetherness
    Togetherness can be thematicized first as an interplay of different cognitive systems as Ego- and Thou-systems. Second, togetherness can be understood as the mechanism of over-determination as simultaneous realizations of different events at the same "ontological" locus, which has its inscriptional realization in morphogrammatics.
    "Since the classic approach to identify cognition and volition separately in a closed unit of individual subjectivity has failed we shall approach the problem from a different side. We shall assume that the phenomenon of subjectivity, as manifested by thought processes and decision making, cannot be looked for inside the skin of an individual living body - be that animal or man. We propose instead the following theorem:
    Subjectivity is a phenomenon distributed over the dialectic antithesis of the Ego as the subjective subject and the Thou as the objective subject, both of them having a common mediating environment." Gunther, Cognition and Volition
    [...]
    The minimal structure of togetherness is the proemiality of the quadruple (volition, cognition, subjective subject, objective subject) in a co-created common world.
    [...]

    Intra- and trans-contextural proemiality of/between cognitive systems
    Intra-contextural proemiality occurs in the process of introspection of a system.
    Interaction vs. interactivity
    Interaction in the so called paradigm shift of computing and computation [...] is mainly understood as informational interaction.
    The flow of information in the new paradigm is not restricted to the internal flow of information in computers and computer systems, but also allows informational communication with a non-computational environment. In this sense computer science finds its way home to cybernetic approaches, mainly to concepts of the old first-order cybernetics.
    Despite the strong differences between interactive and non-interactive computation as open and closed systems, the informational approach to interactivity is not aware that, with the use of the general concept of information, the difference between the inside and outside of interacting systems is leveled to a homogeneous system of information flow. This information flow of algorithmic and non-algorithmic processes, which is basic for the model of interactive computation, is the common and homogenizing mechanism of internal algorithmic and external input-output streams.
    If the difference of inside and outside should have any meaning, it should be clear that the difference as such doesn't belong to the concept of information. There may be an informational process inside a system, and there may be informational processes in the environment of that system too, but the change as such from inside to outside, or simultaneously from the outside to inside, is itself not well understood as an informational process.
    It has taken cybernetics a long time of research to understand the problematics of this constellation. [...]
    [...]
    reflection vs. feedback
    "[...] For instance, a simple feedback loop is not aware of its behavior, and does not define a reflective system (even if reflective systems often do use some sort of feedback)." For instance, a simple feedback loop is not aware of its behavior, and does not define a reflective system (even if reflective systems often do use some sort of feedback)." [...] Meta-Level Architectures and Reflection [...]

    An example: Switches between arithmetics
    [...]
    ["]Implementing Panalogy
    I will use the term Panalogy to refer to a family of techniques for synchronizing and sharing information between different ways of thinking concerned with the same or similar problems. The term derives from 'parallel analogy'. By maintaining panalogies between ways of thinking, we can rapidly switch from one way of thinking to another.
    We can also make more partial changes like the representation language they are using, the types of assumptions they are making, the methods that are available to them for solution, and so forth. The key idea is to support representing multiple problem solving contexts simultaneously and the links between them [...]["] [Minsky, Emotion Machine]
    [...]
    The main difference between panalogy and proemiality is this: panalogy is mono-contextural, always only one method is running, not several at once, and there is no interactivity between successively different methods. They are applied only one after the other. If one method doesn't work, take another. Proemiality is ruling the interplay of different methods running and cooperating together at once.
    Here, my distinction of different modi of thematicizing comes into play: narrative explanations, formalizations, implementations and realizations.
    I am introducing such patterns of multiple thinking directly into the very concepts and methods of semiotics, logics and arithmetics. And this happens step-wise on all 4 modi of thematization.
    The Minsky approach is still mostly in the mode of modeling of some psychological and linguistic concepts from metaphorics into implementations.
    Modeling means that there is some knowledge about the subject, e.g. the way of thinking of a child, maybe with the help of Piaget, and then this knowledge has to be transformed into computer simulations.
    The opposite or complementary approach of polycontextural logic is more concerned with constructing new ways of formal thinking and producing new formalisms, formal methods, and apparatus, to help to understand the structural problems of natural science, e.g. child psychology and the unsolved paradoxes of the Piagetian approach.
    The wording "Switching between parallel methods of thinking" sounds quite promising, but it doesn't gives us a hint how the switch is working, what is the mechanism of the switch, and, how do we know that we are dealing with the same problem after the switch to another domain. How much is the problem itself transformed by the switch of context? And what is the notion of sameness involved in this switch? What do we mean by "parallel" in this context? [Where is the problem?]
    ["]I will use the term Panalogy to refer to a family of techniques for synchronizing and sharing information between different ways of thinking concerned with the same or similar problems.["]
    The common term between the different domains of panalogy is obviously information. But how can we know that all the domains are ruled by the very same concept of information? Why is the term information not in itself panalogical?
    ["]By maintaining panalogies between ways of thinking, we can rapidly switch from one way of thinking to another.["]
    This sounds really good! But, again, how does it work and who is operating these deliberating switches?
    ["]Still, one thing seems common to every such change: In each of our different emotional states, we find ourselves thinking in different ways - in which our minds get directed toward different concerns, with modified goals and priorities - and with different descriptions of what we perceive. [...]
    [...]
    Why don't we stick with one way to think? What are the functions of all those emotions? Our answer is that no one, single technique will help us face every predicament.
    We'll try to design (as opposed to define) machines that can do all those 'different things'.["]
    Marvin Minsky, Emotion Machine
    Minsky's question is "What could cause the change?" and not "How does it happen?" or "What is the mechanism of change?"

    Brainstorming vs. Diamond Strategies
    [...]
    The pananalogy [panalogy] architecture consists of the following components and agents.
    Ways of thinking
    Reflective
    Deliberative
    Reactive

    Brainstorming
    critics
    advocates
    facilitators

    Panalogy
    [...]

    Singh is introducing an interesting list of panalogy operators.
    Environment panalogy.
    Procedural panalogy.
    Sensory panalogy.
    Operator panalogy.
    Category panalogy.
    Ontology panalogy.
    Composition panalogy.
    Realm panalogy.
    Sense panalogy.
    To each key idea I have tried to associate new words: ways of thinking, brainstorming, critics and advocates, reflective critics, and panalogy.
    It is interesting to compare these concepts of "Ways of thinking", "Brainstorming", and "Panalogy" with similar concepts known from the theory of polycontexturality. A possible first correspondence may be:
    Ways of thinking vs. Polycontexturality
    Panalogy vs. Architectonics
    Panalogy transitions, switches vs. Proemiality
    Brainstorming vs. DiamondStrategies
    [...]

    Panalogy transitions and proemiality
    But Singh is keeping his mechanism of switching between different modes a well-guarded secret. It seems that the very idea of multiplicity, of a multitude of ways of thinking, etc., is in itself interesting enough.
    Singh refers to Minsky's Emotion Machine; there we can find a lot of examples which show the phenomenon of changing positions. But there is no explanation of how these transitions are working. What is missing, it seems, is an operator which is not only introducing these multitudes but also operates the switches between different levels, standpoints, ontologies, ways of thinking, and so on.
    My impression is that all these multitudes have to be pre-given by the designer of the system. It is not clear how the system itself can evolve and change its own framework of complexity.
    [...]

    Cognition and Volition
    [...]
    ["]What could cause so dramatic a change? What makes our minds keep switching around? What happens inside a person's brain, to cause such a transformation? This book will argue that when we change what we call our 'emotional states,' we're switching between different "Ways to Think."["] Marvin Minsky
    Gunther seems to be more concerned with the question "How is it possible?" and not so much with Minsky's question "What could cause so dramatic a change?". Obviously, both the how- and the what-questions are working together.
    [...]
    In the proemiality chapter I have given a semi-formal explanation of the concept of proemiality. Now, cognition and volition will give an interpretation of this formal concept and will put it into the more familiar context of the cognition/emotion interplay as we know it especially from Damasio and Minsky.
    [...]
    The interplay of cognition and volition doesn't restrict the reasons for switching from one "way of thinking" to another to only emotional events like falling in love, etc. Also cognition as thinking can produce exciting emotions which are motivating or even forcing volition and cognition to a switch. The mechanism of proemiality also guarantees that both modi of existence, cognition and volition, are always simultaneously active, only changing their role of dominance from background to foreground functionality.
    On the other hand, the concept of proemiality is open for the interplay with other behaviors additional to cognition and volition.

    Gunther's Conceptual Graphs in Proto-Structures
    [...]
    From the point of view of polycontexturality, transitions between different ways of thinking can be seen as a switch between conceptual systems in a polylogical complexity. The actual system is the system under attention, the new system as a possibility is in the background. The transition is in this sense also a change of focus between fore- and background of simultaneously existing parallel systems. Each point of transition belongs simultaneously to different logical systems. Therefore, a transition is not simply an exchange of information but a structural change of logical systems ruled by the operator of proemiality. Such proemial switches are not restricted to single systems or single ways of thinking. A switch can in itself be of complex structure entailing a multitude of ways of thinking and their changes.
    Proemiality in a multitude of ways of thinking opens up the possibility of non-hierarchic, that is, heterarchic thinking and decision making. In a hierarchical system, the way down and the way up coincide; they have to be the same. And at the end, all paths have a common origin as their start or as their end.
    Why do we need a kenogrammatic system like the proto-structure in Günther's diagram? A careful reading of Minsky's and Singh's introduction of pananalogy [panalogy] as a strategy of dealing with "different ways of thinking" shows that they don't offer an answer to the question "Where are these different ways of thinking localized?". We can switch from one method to another, but we are not informed where they are localized structurally. What is the logico-structural difference between the different ways of thinking? They must occupy at least a different place in the whole system. The problem now is that the places are not parts of methods, ways of thinking, but their condition. Places are opening up the possibilities of different ways of thinking, but they don't belong to a way of thinking in the sense of the distributed methods.
    It is exactly the job of the proto-grams of the kenogrammatic proto-structure to offer a location to these different methods and logics, that is, to the different hierarchical conceptual graphs. This is not the only aspect, but probably the most fundamental.
    Each hierarchical conceptual graph starts with its root. All roots are different from each other. There is no common root in this scenario. To realize their interplay they have not only to be different but also to be located at different kenomic loci. Their mutual interplay is ruled by the proemial relationship, their difference is inscribed by their kenogrammatic localization in the proto-structure. A more complex differentiation of the kenomic locus is given by the deutero- and the trito-structure of kenogrammatics.
    Therefore, the interactive interplay between different hierarchical conceptual graphs with their tree structure is ruled by the proemial relationship, also understood as chiasm, between different roots and their branches distributed over different loci. Obviously, locally, each tree is realizing an order relation, between roots and branches we observe an exchange relation, and the relationship between roots of different loci and branches of different loci is realized by the coincidence relation.
    Despite the graphic form of the diagram of proto-structure, it is not a hierarchical system, and there is also no need to postulate a beginning as an ultimate root of the system. It is a grid of different kenomic loci.

    Common Sense Agents and Ambiguity
    "Understanding natural language also requires inferring hidden state, namely, the intention of the speaker. [...]
    Reasoning allows us to cope with the virtually infinite variety of utterances using a finite store of common sense knowledge. Problem-solving agents have difficulty with this kind of ambiguity because their representation of contingency problems is inherently exponential." [...]
    A mechanical system doesn't know the intention of the speaker. Therefore it has to analyze the sentence and to choose in parallel all grammatically possible interpretations; it also has to go on in parallel with the two interpretations until there is new knowledge, from the past or from the new experiences, which enables a decision which interpretation of the sentence should be preferred in the actual context or situation. But the old interpretation will still be a possible choice for the case that the narrative turns back to a new context in which this interpretation will have its own significance and will prevail.
    It is also possible, especially in esthetic texts, that both interpretations are of equal importance, and that there is an ambivalence played by the game of interchange between background and foreground positions of the interpretations. [...]
    All these maneuvers are possible only in a really parallel and grammatically or semantically multi-layered system. Probably the best candidate for this job, again, is poly-contextural logic.
    From a technical point of view of poly-contextural systems, there is no reason to think that the complexity of dealing with ambiguity has to grow exponentially.
    [...]

    Comparatistics of Models and Metaphors of Machines
    Minsky Machines vs. Gunther Machines
    Emotion Machine and Volition Machines
    "On the other hand, a machine, capable of genuine decision-making, would be a system gifted with the power of self-generation of choices, and the acting in a decisional manner upon its self-created alternatives. (...) A machine which has such a capacity could either accept or reject the total conceptual range within which a given input is logically and mathematically located." Günther, Decision Making Machines, 1970
    "We linked many-valuedness with self-reference. No self-reference is possible unless a system acquires a certain degree of freedom. But any system is only free insofar as it is capable of interpreting its environment and choose for regulation of its own behavior between different interpretations. The richness of choice depends on the magnitude of the value-excess offered by the logic which follows." (Günther 1968, 44)

    Turing Machines vs. Gurevich Machines

    Wegner Machines vs. Turing Machines

    Minsky Machines vs. Derrida Machines
    The root problem
    Many ways of thinking, Panalogy: do they have a common root, or even a[n] ultimate origin?
    What about Deleuze/Guattari and all their machines?

    Keno Machines vs. Sign Machines (Markov Machines)
    Schmitthuber's [Schmidhuber's] Gödel Machine vs. Kaehr's [or C.S.'] Gunther Machine"

    Comment
    For sure, the author can discuss forever whatever he wants to, specifically in relation to our original and unique Evoos and its many plagiarisms (see below), but the original and unique idea and expression of idea is already given with our Evoos, which is supported by all of his explanations and contradictions. Eventually, it was taken as source of inspiration and blueprint. Period.
    In fact, we are talking here about a Holonic Multi-Agent System (HMAS) or simply a Holonic Agent System (HAS) in relation to a cognitive architecture, cognitive system, and Cognitive Agent System (CAS).
    We also have the Belief-Desire-Intention (BDI) paradigm or architecture, and the Intentional Programming (IP) paradigm (see also the chapter 8.3 Wachstum des Betriebssystems==Growth of the operating system of The Proposal), as well as the field of Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot), which includes what is called the panalogy architecture, as integrated by Evoos, because an intention of the BDI MBAS is IP and a characteristic property of IP is to use whatever fits to solve a problem.
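    As a minimal sketch only, with all names and data invented for illustration and not taken from any of the cited works, such a BDI-style deliberation loop with a panalogy-like selection of whatever method fits could look as follows in Python:

    # Minimal BDI-style deliberation loop with panalogy-like method switching.
    # All names are illustrative assumptions, not code from any cited system.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        beliefs: dict = field(default_factory=dict)     # what the agent holds true
        desires: list = field(default_factory=list)     # goals it would like to achieve
        intentions: list = field(default_factory=list)  # goals it has committed to
        methods: list = field(default_factory=list)     # alternative ways of thinking

        def perceive(self, percept: dict) -> None:
            self.beliefs.update(percept)                # belief revision from input

        def deliberate(self) -> None:
            # Commit to every desire that is not yet believed to hold.
            self.intentions = [g for g in self.desires if not self.beliefs.get(g)]

        def act(self) -> None:
            for goal in list(self.intentions):
                # Panalogy-like step: try each registered method until one fits.
                for method in self.methods:
                    if method(self.beliefs, goal):      # a method reports success
                        self.beliefs[goal] = True
                        self.intentions.remove(goal)
                        break

    def by_lookup(beliefs: dict, goal: str) -> bool:
        return beliefs.get("table", {}).get(goal, False)    # reactive lookup

    def by_reasoning(beliefs: dict, goal: str) -> bool:
        # Deliberative check: all known preconditions of the goal hold.
        return all(beliefs.get(p) for p in beliefs.get("rules", {}).get(goal, ["?"]))

    agent = Agent(desires=["door_open"], methods=[by_lookup, by_reasoning])
    agent.perceive({"rules": {"door_open": ["key_present", "handle_turned"]},
                    "key_present": True, "handle_turned": True})
    agent.deliberate()
    agent.act()
    print(agent.beliefs["door_open"])                   # prints: True

    The point of the sketch is only the order of the steps, perceive, deliberate, act, with the choice of method deferred to whichever registered way of thinking succeeds first.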

    We quote and translate an online encyclopedia about the subject Intentional Programming: "In computer programming, Intentional Programming is a programming paradigm developed by Charles Simonyi that encodes in software source code the precise intention which programmers (or users) have in mind when conceiving their work. By using the appropriate level of abstraction at which the programmer is thinking, creating and maintaining computer programs become easier. By separating the concerns for intentions and how they are being operated upon, the software becomes more modular and allows for more reusable software code."
    "Intentional Programming is a programming paradigm. It refers to the approach of moving away from the conventional source code as the sole specification of a program in order to express the programmer's intentions in a better way through a variety of respectively suitable specification options."

    Furthermore, we also have the field Intelligent Virtual Environment (IVE) and the related

  • original works, like for example the document titled "Integrating Reactivity, Goals, and Emotion in a Broad Agent"
  • suspicious works, like for example the documents titled "Social Interaction Framework for Virtual Worlds (SIF-VW)", "CoMMA - Multiagent Planning and Scheduling (CoMMA-MAPS)", and "CoMMA - Cognitive Architecture for Social Agents (CoMMA-COGs)", and
  • plagiarisms, like for example the documents titled "Agent Chameleons: Agent Minds and Bodies", "Agent Chameleons: Virtual Agents [Powered By] Real Intelligence", and "NEXUS: Mixed Reality Experiments with Embodied Intentional Agents", and in this relation we already showed our fusion of realities in this and other contexts. In the chapter 5 Zusammenfassung==Summary of The Proposal, C.S. proposed an assignment of the physiological senses and the muscles for the "informational communication with a non-computational environment".
    We showed this even more extensively and better with our Caliber/Calibre.

    While reading and quoting this chapter, we finally got the confirmation of a long-standing suspicion: in fact, from the very first view we had the specific impression that the Panalogy Architecture and the related Emotion Machine (EM) of Minsky and Singh are based on our works. Now we have the proof that they are based on our work of art titled Analyse und Entwurf eines Betriebssystems nach evolutionären und genetischen Aspekten==Analysis and Design of an Operating System According to Evolutionary and Genetic Aspects, also titled Evolutionary operating system and abbreviated as Evoos (see also the quotes and comments to Child like Computing above based on the work of Jean Piaget).

    Even better, our point of view on the cybernetical matter, including proemiality and polycontexturality, specifically what we discussed in relation to linguistics, many-valued logics (e.g. fuzzy logic), fibring logics, parallelity and concurrency, and distributed computing, is supported.

    Better still, we have both works with their complementary aspects in Evoos, as we said in relation to our working philosophy of integration, unification, and fusion, the

  • objective, monocontextural approach based on classical logics and
  • subjective, polycontextural approach based on non-classical logics and cybernetical logics, including polylogic and PolyContextural Logics (PCL) or subjective logic, and on arithmetics (see Algorithmic Information Theory (AIT) and prime factorization in The Prototype)

    which are compared in the Diagramm 45 on page 219 (marked page 170) of the chapter Minsky's new Machine in the draft titled "Derrida's Machines Part I [...]".

    Interestingly, we came to our conclusion due to the lack of subsymbol grounding and symbol grounding for proemiality and polycontexturality respectively, due to the unsolved linguistic problem with them, but the proponents of the Proemial Relationship Model (PRM) and the PolyContextural Logics (PCL) claim that any attempt to use mathematical linguistics to extend classical logics would remain in the scope of a kernel of classical logic, which is taboo for them. But they are unable to solve the Symbol Grounding Problem (SGP) with their metaphysical 'Not is not Not' fantasies and therefore without linguistics. :D
    So we do need both and an integration of (the best of) both. Oh, ... what? An Evoos. :)
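    A minimal sketch, assuming an invented sensor signal, threshold, and rule, of what such an integration of a subsymbolic grounding layer with a classical symbolic layer could look like:

    # Sketch: ground symbols subsymbolically, then reason over them classically.
    # The signal, threshold, and rule are invented assumptions for illustration.

    def ground(signal: list, threshold: float = 0.5) -> dict:
        # Subsymbolic layer: map a raw signal to truth values of symbols.
        return {"hot": sum(signal) / len(signal) > threshold}

    def infer(facts: dict) -> dict:
        # Symbolic layer: the classical rule "hot -> cooling_needed".
        facts["cooling_needed"] = facts.get("hot", False)
        return facts

    print(infer(ground([0.7, 0.8, 0.9])))   # {'hot': True, 'cooling_needed': True}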

    In relation to the discussion of
    "Architectonics vs. Tectonics
    Sign systems, as the scriptural medium of computation, are structured by their tectonics. This type of hierarchical tectonics is based on a single cut architectonics. But this cut is not part of the sign system, it is much more its blind spot.
    From the point of view of polycontecturality [polycontexturality], sign systems, semiotics, are not structured by architectonics.
    Architectonics are understood as a form of mediation of structural and processual, algebraic and coalgebraic, principles.
    Architectonics is not a Ur-ground, a static and eternal fundament, origin of everything. Architectonics is complex and dynamic, giving space for a multitude of beginnings of interacting tectonic systems. Starting a list of questions and possible answers about reflectional blindness.
    What is the blind spot of a program? Answer: Its operating system.
    What is the blind spot of an OS? Answer: Its hardware system.
    What is the blind spot of a computer system? Answer: Its users. Or: Its environment."

    In fact, we had already recognized and thought through everything in 1999: the fields and their complementaries, their integrations, their architectures, and so on.
    What the author has overlooked is exactly what the Agent Chameleon project, including NEXUS, has stolen: our fusion of realities due to these blind spots.
    See also the chapters

  • 2.3 Architekturen von Betriebssystemen==Architectures of Operating Systems and
  • 2.7 Neue Anforderungen an Betriebssysteme aus der Sicht der Software-Technologie==New Requirements for Operating Systems from the Point of View of Software Technology,

    and also

  • 5 Zusammenfassung==Summary with the assignment of real parts and functions of an organism and real parts and virtual functions of a hardware and software

    of The Proposal.

    In relation to the discussion of
    "Proemiality and Panalogy"
    "Cognitive Systems and Panalogy Architectures" versus "Cognition and Volition"
    "Complementary?"
    we can only repeat that we cannot see the problems of the author, because Evoos is the ultimate foundation that "ticks all boxes". And it is a work of art, an absolute masterpiece in relation to so many points of view, movements of art, and epochs of culture, obviously, doubtlessly, and definitely. It is not like da Vinci, Einstein, Picasso, and so on, but it is me, myself, and I, and also something totally new.

    In this relation, we would also like to note that we came to the root problem through considerations about ontologies and database schemata based on the Structured Entity Relationship Model (SERM), because they

  • are based on a heterarchy of existing entities, in the case of SERM multiple leftmost-standing entities being allowed,
  • are subject to an inherent subjectivity, and
  • allow the creation or formulation of virtually infinite many different ontologies, database schemata, and so on.

    Furthermore, they do not define an ultimate common sense, which emerges in a dynamic process of subjective entities, as also discussed in the comment to the quoted works related to the fields of Semantic (World Wide) Web (SWWW), Linked Data (LD), and Dynamic Semantic Web (DSW) below.
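    As a toy sketch, with invented entity types, such a SERM-like schema can be represented as a directed acyclic graph that allows several independent, leftmost-standing entity types, that is, a heterarchy without a single common root:

    # Sketch: a SERM-like schema as a graph of existential dependencies.
    # Entity type names and dependencies are invented for illustration.
    schema = {
        "Customer": [],                     # independent, leftmost-standing
        "Product": [],                      # independent, leftmost-standing
        "Order": ["Customer"],              # exists only relative to a Customer
        "OrderItem": ["Order", "Product"],  # depends on Order and Product
    }

    roots = [e for e, deps in schema.items() if not deps]
    print(roots)                            # ['Customer', 'Product'] - no single root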

    [Here or elsewhere comes a short summary about the plagiarisms Panalogy, Emotion Machine, Evolvable Architectures, H-CogAff, etc.]

    Computational Intelligence (CI) and Soft Computing (SC)
    hybrid agent architecture InteRRaP → Holonic Multi-Agent System (HMAS) or simply Holonic Agent System (HAS) → Evoos
    3-layer architecture Cognition and Affect (CogAff) → Evoos → H-CogAff not only after, but because of Evoos
    CogAff → CoMMA - Cognitive Architecture for Social Agents (CoMMA-COGs) and CoMMA - Multiagent Planning and Scheduling (CoMMA-MAPS) → Evoos
    Social Interaction Framework (SIF) → Social Interaction Framework for Virtual Worlds (SIF-VW) (VR) → CoMMA-COGs and HAS and hybrid agent architecture InteRRaP → Evoos (MR, XMR or XR, fusion or NR) → Agent Chameleons and NEXUS (MR, XMR or XR, fusion or NR)

    Sloman virtualization → Evoos ← operating system virtualization
    Cybernetics → Evoos ← ontology-based architecture
    Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot) → Evoos ← ontology-based architecture
    Ubiquitous Computing (UC or UbiC) → (Sloman) Affective Computing (AC or AffC) → Evoos
    UC or UbiC → AC or AffC → Intelligent Environment (IE) → Evoos ← IE ← MBAS or ImRS or Immobot

    Dos Aperios (Apertos (Muse)) → TUNES OS → Evoos
    Aperios (Apertos (Muse)) → Evoos ← CogAff
    Evoos = os + virtualization + Cognitive Architecture and Cognitive System (e.g. Cognition and Affect (CogAff) paradigm or architecture), but not just Cognitive Agent System (CAS) (e.g. Belief-Desire-Intention (BDI) paradigm or architecture).
    classical logics, non-classical logics, transclassical logics, cybernetical logics, polycontextural logics → Evoos → 1. Panalogy → 2. Emotion Machine → 3. Polyscheme

    CA, CS, and CAS address all deficits of Agent-Based System (ABS), Multi-Agent System (MAS), os, middleware, and VR.

    The term ontology used in relation to cybernetics means philosophy (e.g. ontological frame and ontological relativism), but not Markup Language (ML), Semantic (World Wide) Web (SWWW), etc. Therefore, the Arrow System is not related to the SWWW.

    Cybernetics → proemiality and polycontexturality → Arrow System ? → Distributed operating system (Dos) TUNES OS

    Can our fans and readers see how the author tried to talk down our Evoos and simultaneously fell flat on his big mouth?

    This also explains why the excitement of Professor W. Banzhaf increased in 2000. This is worldwide leading, science shifting fundamental work in fields like philosophy, logics, mathematics, informatics, bionics, AI, ML, CAS, EC, robotics, and so on. We have never claimed otherwise.

    Even better still, we added so many more ingenious features that they were completely overwhelmed with understanding and stealing all of our Evoos.

    This raises many old questions once again, specifically whether they all knew what C.S. was creating since 1998. We would answer it with a 'Yes' and are even able to show evidence and provide forensic proofs in virtually all related cases.

    Sloman and other still existing entities are in need of explanation, because they cannot explain how they came up with exactly the same idea as C.S. for their proposals about 4 or 5 months later.
    Specifically, Sloman's project proposal for the funding foundation does not help here either.
    Furthermore, the project Evolvable Architectures and the close collaboration of Minsky and Sloman, as well as the attempt to map the cybernetic aspects to logical and mathematical ones, especially to their own older works, show that collectively they have not only used our original and unique work of art as a blueprint, but that they have stolen it deliberately.

    Our claim is also supported by the fact that all bees were dancing around our Evoos, but not around any other bee. For example, the TUNES project is mentioned in relation to Derrida's Machines, but the author only discusses Minsky, Sloman, and Singh, and hence our Evoos, but not the Arrow System. Obviously, this must have something to do with the truly original and unique work of art, our Evoos. Is it not?

    As in the case of the Cognitive Agent System (CAS), we also had all the years the impression that Berners-Lee has observed our actions as well and then taken the part related to ontology, Knowledge Query and Manipulation Language (KQML), and so on for the Semantic (World Wide) Web (SWWW). Indeed, the SWWW is to some extent complementary to our Evoos and the works, which suddenly emerged by pure happenstance (not really) in the field of Multi-Agent System (MAS). Is it not? Problem: we already had all together and presented our complementary part and other original and unique parts with The Proposal (see the Clarification of the ... (where all webs (WWW, Ubiquitous Computing and IoT, Immobot WWW, P2P MAS, SWWW, DSWWW, 3D, Web3, etc.) come together with our OS, including our Evoos)).

    What we have to do in our publications, explanations, clarifications, and investigations is to keep the basic properties of our Evoos separated from the basic properties of our OS, and to classify and compare the truly relevant prior art, because other entities are free to use them, but not our integration of them, which is what the industries are doing illegally.
    But we already mentioned in relation to the latter point that certain works, which looked like proper prior art at first, turn out to

  • be some kind of fraud, or
  • be irrelevant for the causal dependencies in relation to the historical classification, technological progress, and legal assessment,

    because they

  • classify,
  • contradict, or
  • annihilate

    each other in the overall holistic analysis of the issue.
    For example, the Arrow System in relation to the TUNES OS is basically the Binary-Relational Model (BRM), the Proemial Relationship Model (PRM), and the PolyContextural Logics (PCL), and merely discusses the bootstrapping approach of our Evoos, which raises even more questions due to the many inconsistencies and deficits.
    There is also a certain relation to the reflective Distributed operating system (Dos) Aperios (Apertos (Muse)), which already has most of the properties of the TUNES OS.
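    The Arrow System is publicly described as representing everything as arrows, that is, as ordered pairs of references to other arrows. A minimal sketch of such a binary-relational store, with invented names and no claim to match the actual implementation, could look like this:

    # Sketch of a binary-relational "arrow" store: every datum is an ordered
    # pair of references to other arrows. Names are invented for illustration.
    arrows = []                     # arrow i is the pair arrows[i]

    def arrow(head: int, tail: int) -> int:
        arrows.append((head, tail))
        return len(arrows) - 1      # an arrow is identified by its index

    a = arrow(-1, -1)               # two primitive, self-standing anchors
    b = arrow(-1, -1)
    r = arrow(a, b)                 # a binary relation instance: a -> b
    meta = arrow(r, r)              # arrows can reference arrows: reflection
    print(arrows)                   # [(-1, -1), (-1, -1), (0, 1), (2, 2)]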

    We will correct our related publications, explanations, clarifications, and investigations accordingly.
    But we are very sure to have found a grounding.

    As we already mentioned, the works are plagiarisms to a large extent, because they copy parts of the original and unique expression of idea presented with The Proposal describing our Evoos:

  • Sloman, A.: Architectural requirements for human-like agents both natural and artificial. (what sorts of machines can love?). 2000.
  • Sloman, A., Scheutz, M., Logan, B.: Evolvable Architectures for Human-like Minds. 2000.
    The work is about the Cognition and Affect (CogAff) architecture (reactive and deliberative respectively hybrid, and reflective architecture), specifically its human-like variant of the CogAff architecture designated H-CogAff, and the fields of evolvable cognitive system, cognitive architecture, and architecture-based ontology among other topics, which is the Evolutionary operating system (Evoos) and its Evoos Architecture (EosA).
  • Minsky, M.: DRAFT Future Models for Mind-Machines. 2000.
    The work is about thoughts for what become The Panalogy Architecture and The Emotion Machine.
  • Sloman, A., et al.: SimAgent-Tools for Designing Minds (A toolkit for philosophers and engineers). 30th of January 2001. 13th of October 2003. 11th of February 2004. 6th of June 2006.
    The work is about the human-like variant of the CogAff architecture designated H-CogAff, which is the Evolutionary operating system Architecture (EosA), and ontology among other topics.
  • Sloman, A.: Beyond Shallow Models of Emotion. 2001.
  • Cassimatis, N.: Polyscheme: A Cognitive Architecture for Integrating Multiple Representation and Inference Schemes. 9th of November 2001.
    The work was written by a student/researcher of Minsky and is about Cognitive Agent System (CAS) and common sense. Parts were taken from our Evoos, which already integrates classical logics, non-classical logics, and cybernetical logics (e.g. holologic, polylogic, PolyContextural Logics (PCL)). Note that the latter comes also through the TUNES OS, which is closely connected to the Arrow System, which is about the Proemial Relationship Model (PRM) and PolyContextural Logics (PCL), which again is the basis for Common Sense Computing (CSC).
  • Sloman, A.: Varieties of affect and learning in a complete human-like architecture. March 2003.
    The work mentions Minsky, M.: The Emotion Machine. 2006, and reactive and deliberative respectively hybrid, and reflective architecture, which in this context includes the H-CogAff architecture respectively the Evolutionary operating system Architecture (EosA).
  • Singh, P.: The Panalogy Architecture for Commonsense Computing. 2003.
    "We need a new way to think about how to organize the agents in a common sense system; we need an architecture of diversity for commonsense computing (McCarthy, et al., 2002) - a framework that supports the organization of arrays of diverse and imperfect methods in order to build societies of agents that together are highly resourceful and robust. Building on recent proposals by Minsky and Sloman of the architecture of a person's mind [(reactive and deliberative respectively hybrid, and reflective respectively the Evolutionary operating system Architecture (EosA))] (Minsky, forthcoming; Sloman, 2001), I propose to build such an architecture, based on the following key ideas: [...]"
    • Sloman, A.: Beyond Shallow Models of Emotion. 2001.
    • McCarthy, J., Minsky, M., Sloman, A., Singh, P., et al.: An architecture of diversity for commonsense reasoning. 2002.
    • Minsky, M.: The Emotion Machine. forthcoming. 2006.
  • Kennedy, C.M.: Distributed Reflective Architectures For Anomaly Detection And Autonomous Recovery. June 2003.
    See the comment to the quote of the document titled "Derrida's Machines Part 1 [] The new scene of AI: Cognitive Systems?".
  • Minsky, M.: Emotion Machine. 2006.
  • Jiang, H.: From Rational to Emotional Agents. 2007.

    The prior art referenced in these and other similar works, specifically about a layered model of intelligence, which is a reactive and deliberative respectively hybrid, and reflective agent architecture respectively the Evolutionary operating system Architecture (EosA), and even

  • Sloman, A., et al.: Sim_Agent Toolkit. 1993.
  • Sloman, A.: The mind as a control system. 1993.
    A Dynamical Systems view of H-CogAff, which was presented in January 2001 or March 2003 (see Talks #24).
  • Darryl Davis (ed.): Visions of Mind: Architectures for Cognition and Affect.
    This book (2004) includes the proceedings of the symposium "How to Design a Functional Mind" (the DAM, 'Designing a Mind', symposium), which was held at the AISB'00 Convention at the University of Birmingham, 17 to 20 April 2000.

    cannot overcome this fact, but provides more evidence that, without any doubt, establishes causal links with our original and unique works of art.

    The same holds for the fields of Multimodal User Interface (MUI), Intelligent Mixed Reality Environment (IME), fusion of realities, Autonomic Computing (AC), and also smart contract, blockchain, Web x.0, Ontoverse (Ov) (metaverse multiverse), and so on as discussed in the past.

    It was quite interesting to find out who played foul and how entities played, even in relation to our Evoos. In this case, the author has very good connections to the F.R.German clique around SAP and the other joint venture partners of the company DFKI, the former fake company Ontoprise besides Ontotext, and Siemens, but also the MIT, as usual, and even the University of Birmingham. Marvin Minsky, Peter Voss, and Ben Goertzel, as well as British Telecom, France Telecom, Telecom Italia, and Deutsche Telekom knew all the time what was going on, but we are wondering if Aaron Sloman also knew the truth all the time, though his email about "layered virtual-machine architectures" and at least the thesis of one of his students, who knew exactly what was going on, show that they were both taking our Evoos as source of inspiration and blueprint. In the cases of the author Rudolf Kaehr and also Peter Wegner there is no doubt, specifically due to their individual attempts to confuse the public about the true origin of our Evoos, but we are not sure when they joined that party and jumped on the bandwagon, too.
    But our holistic view also showed that virtually all bees were dancing at exactly the same time and are still dancing around exactly the same something, with our Evoos at the center, which must have been and obviously was the only new thing at that time indeed. It cannot be explained otherwise by showing an alternative ordinary technological progress or any other progress. Is it not?
    For sure, our observations, collections, and comparisons of facts allow certain implications, conclusions, and also claims.
    By the way: Do not confuse John Laird and Philip Johnson-Laird.

    Please note that the discussed architecture of a person's mind belongs to the cybernetic self-portrait of C.S. in whole or in part and we will definitely not allow anybody to mess around with it.

    And we would also like to recall that The Proposal, which is published on the website of OntoLinux, is the second version and the working version of what became The Prototype, because we have the original files stored on an old, unconnected hard disk in our safe and have not scanned the first version, which was printed, presented, discussed, and also shared on the 10th of December 1999. We are already looking in our safe for the original to fix this.
    Maybe it is time to look on the old hard disk to find some more information and evidence.

    Also note that the project began in the summer of 1998, after looking once again into the book titled "Bionik: Natur als Vorbild", published in 1993 (see the chapter Evolutionsbionik, which we republished exactly for this reason), and getting the idea of a Genetic operating system. To our big surprise and happiness, the library of the University Dortmund, in which we were searching through all the books for more input nearly twice each week, was expanded with new shelves at this time, and therefore we can exactly remember that this was the time when it bought around 25 copies of the book "Genetic Programming" by W. Banzhaf. Even more surprising, he held his lecture in this field in the winter semester 1998/1999 for the first time. We even still have the card from him to get a discount when buying the book. Somehow, after one lecture we met on the parking lot and began to talk about the matter, and C.S. mentioned the idea of a Genetic operating system. Howsoever, around January or February to March 1999 the initial talk developed into the discussion about a diploma thesis, and it took around 6 months to get the thing together as published in The Proposal. In the course of this research phase, we also found the email of Eric "Astro" Teller.
    And to set the historical record straight, we also repeat that we did not look at others, but just did our thing and observed that our things emerged outside our office all the years, and therefore did not even know Aaron Sloman at all until around the year 2005, when we began with the implementation of our Evoos respectively OS and were looking for a multiparadigmatic programming environment. To our great surprise but also happiness, the Sim_Agent Toolkit was virtually a perfect match, but only because we did not want to use the Java programming language, the BlackBoard (BB) (e.g. Tuple Space (TS)) system JavaSpaces, Java Jini, and so on, and preferred an extension of C++, which became the Intentional Programming (IP) paradigm, the Java Jini clone .NET framework, and the C# multiparadigmatic programming language (see also the chapter 8.3 Wachstum des Betriebssystems==Growth of the operating system of The Proposal). Now we do know why.
    The rest can be found in our related clarifications and investigations.

    And we have the impression that they still have not understood our Ontologic System (OS) or try to continue with that fraud.
    In fact, our Evoos already covers the whole space of possible

  • sets of requirements for architectures (niche space, since a niche is a set of requirements) and
  • designs for architectures (design space) based on these requirements,

    which is the result of the examination and addressing of

  • all possible evolutionary, developmental, learning, social, cultural, and ecological trajectories,
  • all ways of validation and verification, or assurance regarding needs and purposes respectively doing the right thing and evaluation regarding regulation, requirement, specification, or imposed condition respectively doing it right,
  • all sorts of virtual machinery and the various designs for such virtual machinery, including evolution of protocols for interactions of various sorts between components, between hardware systems, between virtual machine sub-systems, etc.,
  • all sorts of realities, and
  • all kinds of intelligent system,

    which are not only common sense, but truly make sense at all in the observable universe and slightly beyond.
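    As a toy sketch with invented data, the relation between niche space and design space can be read as a coverage test of requirement sets by capability sets:

    # Sketch: a niche is a set of requirements; a design fits a niche if its
    # capabilities cover them. Niches, designs, and data are invented.
    niches = {
        "desktop": {"ui", "multitasking"},
        "robot": {"sensing", "reactivity", "planning"},
    }
    designs = {
        "layered_hybrid": {"sensing", "reactivity", "planning", "multitasking"},
        "purely_reactive": {"sensing", "reactivity"},
    }

    for niche, requirements in niches.items():
        fitting = [d for d, caps in designs.items() if requirements <= caps]
        print(niche, "->", fitting)
    # desktop -> []  (no listed design offers 'ui')
    # robot -> ['layered_hybrid']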
    Our OS is the synthesis of the whole field and always fits with its singular Ontologic System Architecture (OSA), which integrates all in one on the basis of our (smart) molecular or liquid system composition approach as part of our Ontologic(-Oriented) (OO 3) paradigm, as we already explained. Some of the best examples for this general property are the fields of

  • ontology, ontologics, and our Ontologic Zero or Ontologic Null respectively Zero Ontology or Null Ontology,
  • Logics, including classical logics, non-classical logics (e.g. many-valued logics, dynamic logics), and cybernetical logics {classification correct?} (e.g. holologic, polylogic, PolyContextural Logics (PCL)),
  • Binary Relational Model (BRM),
  • Polygonal Data(base) Model (PDM),
  • Proemial Relationship Model (PRM),
  • Abstract Abstract Machine (AAM) (there exist a lot of AMs coming before VMs),
  • Virtual Virtual Machine (VVM),
  • operating systems (oss),
  • Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia Programming (M⁶P) paradigm,
  • Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia Computing (M⁶C) system,
  • Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia User Interface (M⁶UI), and
  • Ontologic System Components (OSC),

    and even our

  • Caliber/Calibre and
  • Theory of Everything (ToE) based on our Caliber/Calibre, which also includes the whole observable universe and slightly beyond besides the individual entity and its environment, and their evolution, and so on.

    The same holds for the fields of Multimodal User Interface (MUI), Intelligent Mixed Reality Environment (IME), fusion of realities, Autonomic Computing (AC), and also smart contract, blockchain, Web x.0, Ontoverse (Ov) (metaverse multiverse), and so on as discussed in the past.

    Eventually, one can see how visionary and far-reaching our Evoos already was in 1999 and how important it is for everything that followed.
    In addition, it becomes now really interesting in relation to our Ontologics®, because everybody can see how visionary and ingenious our masterpiece OS already was in 2006 by integrating all in one.

    The author came 6 years too late to the party and tried to get the sovereignty of interpretation, competence, and leadership in our field, even on the basis of our work of art.
    But at this point that mess does not stop. In fact, there is already the complementary party of authors, which has done the same since at least 1999.
    And the cherry on top of the icing on the cake is that the author is now trying even the same with the other authors.
    We can assure everybody that she, he, or they do not want to know what we think about that.

    This also shows another time the major blow for the scientific community, university, and industry concerned.

    Here that unbelievable farce and incredible scandal, or better said, serious crime definitely ends.
    Do they have no morals at all, no self-entitlement, and not even self-respect, to do something so antisocial?

    Please stop that farce, because

  • everybody has known for more than 2 decades what is truly going on,
  • we have now the facts lying on the table, which clearly show that our Evoos is the original, but is not mysterious,
  • we have unmasked all plagiarists and fraudsters, and
  • nobody is buying that H-CogAff architecture, Panalogy architecture, Emotion Machine architecture, Meta-Morphogenesis, and whatever other fairytales anymore.

    And the same holds for all those maniacs of OpenAI. You will remove all matter that infringes our copyright voluntarily, for example a verified L4 microkernel with AI, or be asked by the courts to do so, with blacklisting of all entities concerned.

    But if you refuse to stop it, then it goes from our table to the desk of a judge as well, because due to the lack of proper naming and referencing of the true origin of our works of art, performances, and achievements we have several infringements of our rights (e.g. integrities, reputations, etc.) and properties (e.g. copyright), which are not allowed, not even for actors in the field of the arts and the sciences.
    That handful of entities in the field of Artificial Intelligence (AI) and related fields thought they could take the whole world for a ride.

    We quote and translate a document, which is about Derrida's Machines and multiparadigmatic programming and was published in 2003 and 2004: "Derrida's Machines Part II
    Dynamic Semantic Web
    [...]

    Why a Dynamic Semantic Web?*
    SAP INFO 10/2003
    20.10.2003 / Interview with Prof. Dr. Jürgen Angele, ontoprise GmbH
    Will computers understand us one day?
    *Are you achieving the leap from the processing of data to the processing of knowledge with semantic technologies?
    Angele: ["]Yes, because semantic applications "understand" information. "Understanding" presupposes a common language in order to exclude conceptual and terminological confusions, unclarities, and ambiguities. And exactly this can be achieved with semantic technologies. In an ontology, the terms relevant for an application domain and their interrelations are defined exactly. The ontology describes a generally accepted understanding of this application domain, which all persons and applications share and use in common.["]
    Is this what we want with the DSW?

    Goal: What should be achieved?
    A framework for a Dynamic Semantic Web shall be developed that corresponds to the characteristics of the WWW and is not merely aimed at the exteriorization of database systems.
    The WWW is understood here not only as an open system with the properties distributed, dynamic, and quantitatively massive (Hendler), but additionally as a global-cultural, complex, self-organizing, and self-modifying medium of artificial nature. This also means that the WWW is not pre-given (present-at-hand), but discloses itself only to an interpretation in its readiness-to-hand.
    The existing methods concentrate on the presence-at-hand of the data in the WWW; the DSW has to face the challenge of the principal interpretability of the WWW, that is, of its readiness-to-hand.
    Therefore, knowledge and meaning in a WWW as a cultural system are fundamentally not to be reduced to unambiguity, disambiguity, and decontextualization. This is possible solely for very special requirements.
    The DSW thus has the goal of offering mechanisms for the handling, implementation, formalization, and realization of ambiguous, context-related, and polysemous knowledge, which is nonetheless accessible to machine processing.
    Some more concrete goals
    Methods for the creation of complex evolutive ontologies shall be developed that can meet the requirements of, for example, the following criteria. [Bingo!!!]
    1. Ontology Engineering
    To generate, from the complex multitude of data of an organization, realized in heterogeneous ontologies, a vertically structured unified ontology, which can then be processed with the methods of the Semantic Web, constitutes a big and largely unsolved problem. However, the effectiveness of an implementation is also measured by the effectiveness of the acquisition of its data.
    An additional horizontal form of organization can counteract the bottlenecks of an imposed hierarchization here.
    2. Distributed inferencing, architectonic parallelity
    Distributed inference mechanisms can be realized directly and without complications on the basis of the polycontextural logic. Per contexture, or per module, an own and autonomous deduction rule can be introduced. This goes far beyond classical approaches of parallelization and the distributions founded by many-sorted logics.
    3. Meta-Reasoning, Reflectionality
    Reflectionality is inherent to the polycontextural architectonics, on the logical as well as on the ontological level. After all, it stems from the endeavor to realize a theory and an apparatus of the forms of reflection.
    4. Reusability
    Reusability gains a new dimension through the tabular arrangement of the modules, which cannot be realized by the vertical conception alone.

    Restriction: What should not be achieved?
    Despite the fundamentally new approach, the DSW project is not about criticizing what exists in its concrete definition and functionality, or even about exposing it as wrong. Simply because the PCL approach solely attempts to approach a common problematics from other, possibly more general premises, yet with far less mature technologies.
    But it is also not about entering into a race or even into competition with the existing approaches, which have specialized in specific questions [...].

    Methode: Wie und womit soll DSW erreicht werden?
    Web Ontologien bestehen aus Modulen, die vertikal organisiert werden und somit eine Dynamik der Evolution, Adaption und Erweiterung im Rahmen einer systematischen Hierarchie ermöglichen.
    DSW erweitert dieses Konzept der Modularität dahingehend, dass alle, auch die Basis-Module, horizontal organisiert werden können. Damit entsteht ein System ontologischer und logischer Parallelität und Nebenläufigkeit, das vertikale Interaktion zwischen den Ontologien und deren Modulen ermöglicht.
    Die horizontale Organisation ontologischer Module soll mit den Methoden der polykontexturalen Logik realisiert werden. Die Polykontexturalitätstheorie stellt logische und ontologische Methoden der Vermittlung und Distribution modularer Systeme bereit.
    Dabei kann jeder Modul innerhalb einer horizontalen Organisation selbst wiederum vertikal hierarchisch strukturiert sein. Damit ist ein flexibler und kontextbezogener Wechsel zwischen der horizontalen und der vertikalen Funktionalität gewährleistet.
    Die Möglichkeit des Wechsels zwischen horizontaler und vertikaler Organisiertheit, oder in a.W. [anderen Worten] zwischen Hierarchie und Heterarchie, stellt die Grundstruktur der Dynamik des DSW dar. Dieses Verständnis von Dynamik stellt ein Novum in der Konzeptionalisierung und Implementierung von logischen und ontologischen Systemen dar. [Bingo!!!]
    Die konkrete Realisierung einer Implementierung von DSW hat sich mit den sich entwickelnden Methoden und Programmiersprachen des Semantic Web produktiv kritisch auseinander zu setzen und Strategien der Erweiterung, geleitet durch die Ergebnisse der polykontexturalen Logik- und Ontologie-Forschung, zu entwickeln.
    Inheritability and usability of methods
    Thus, despite the novelty of the DSW approach, connectivity and comparability, but also the exploitability of existing work, are guaranteed. For when modules that are internally organized vertically are brought into a distribution and mediation of a horizontal kind, their conceptions, methods, formalisms, and techniques can be carried over. The vertical methods are inherited, albeit possibly in modified form, into the horizontal structure. In this respect not everything needs to be reinvented in order to realize the DSW project.

    Benefit: What is DSW to be achieved for?
    A tabular organization of ontological and logical modules automatically opens up structural advantages over a linearly organized structure.
    Transparency
    Horizontally distributed modules and ontologies support transparency due to their relatively autonomous modularity, which constitutes a reduction of complexity.
    Flexibility
    Horizontally distributed ontological and logical modules support flexibility due to their option of choosing between vertical and horizontal organization.
    Availability
    Horizontally distributed modules and ontologies support availability through their distribution over the two dimensions of their positioning.
    Effectiveness
    Horizontally distributed modules and ontologies support the effectiveness both of their establishment and of the course of their processes, thanks to their architectonic parallelism.
    In particular, the processes of navigation, negotiation, and mediation of and between vertically and horizontally distributed ontologies are supported by the polycontexturally distributed organization.
    Navigation
    Navigation between modules gains a new dimension when its scope is no longer restricted by a superordinate base ontology common to all of them.
    Mediation
    Mediation of modules is extremely limited in vertical forms of organization and presupposes a base ontology common to all modules. In this sense, vertical mediation is ultimately a form of subsumption that is not capable of accepting the foreign and of interacting with it.
    Negotiation
    Even though DSW relies on machine assistance, there is still enough room for negotiation between human subjects. These negotiations, however, can now also rely on formal models of mediation and are not at the mercy of pure arbitrariness or blind trust.
    Evolution [Bingo!!!]
    DSW shall expose fundamental problems of the evolution of the WWW and of Semantic Web ontologies and help towards polycontextural solutions. The existing methods for handling the evolution of ontologies are limited to the vertical organization of their methods.

    [...]

    Time frame: When is DSW to be achieved?
    In a first three-year plan, the first year is to consolidate the existing research work, which in the following two years is to lead to a mature prototype.
    The emancipation from the methods and formalisms of the Semantic Web towards a polycontexturally founded DSW can only happen step by step.
    A first step is the critical review of the existing tendencies in the implementation of the Semantic Web with regard to ontology building, web logics, and implementation languages.
    A further step is the demarcation from these methods and the development of extensions of the existing conceptions and methods of the Semantic Web.
    In a provisionally last step, this is to lead to the development of a prototype of a DSW implementation.

    Demarcations: Against what is DSW to be achieved?
    In view of the growing global cultural dominance of the WWW, a reductionist, technicist, and state-implemented notion of meaning and knowledge is to be opposed. This is not to deny the relative adequacy of reductionist methods for limited industrial, administrative, and military purposes.
    Here, however, the WWW is understood as a cultural and global medium. DSW therefore understands itself as a strategy, not reduced by Eurocentrism and not based on Aristotelian metaphysics, for opening up a global cultural WWW.
    DSW is to provide models of thought and strategies of behavior for dealing with the WWW that are capable of supporting a departure from Aristotelianism in ontology and logic, as well as from the fixation of the machinic on the Turing model.
    It cannot be overlooked that, after the victory of the technicist way of thinking in and through the computer technologies, a corresponding appropriation of cultural layers of knowledge has now been set in motion by the international Semantic Web project. Meanwhile, the educational institutions are still entirely occupied with adapting to digitalism and its multimedia culture. The helplessness in the face of this phenomenon unfortunately also shows in the otherwise excellent critical work on the Semantic Web [...].
    [...]

    Towards a Dynamic Semantic Web
    Dynamic Semantic Web (DSW) is based at first on the techniques, methods, and paradigms of the emerging Semantic Web movement and its applications. DSW advances one fundamental step further, from a static to a dynamic concept of the Semantic Web, with extended flexibility in the navigation between ontologies and a more profound transparency of the informational system. Web Services are now redefined by the Semantic Web. To prove the advantages of DSW, the main aim of this project is to develop the tools and methods necessary to build a DSW based Web Service (DSW business application).
    The existing framework of the Semantic Web has only very limited possibilities of realizing dynamism. Its dynamism is reduced to inter-ontological transactions (translations, mappings, navigation) between different local taxonomies and ontologies.
    DSW is based on the genuinely dynamic first order ontologies and logics founded in the kenogrammatics of the theory of polycontexturality, allowing evolution and metamorphosis to create complex interactivity and new domains of interaction. [Bingo!!!]
    A General Metaphor

    The Semantic Web
    [...]
    Today, the Semantic Web is becoming an important reality. Not only in research centres but also in industrial, business and governmental organizations, Semantic Web applications are advancing. Semantic Web is understood as the "Next Web".
    [...]
    As the WWW is based on HTML, the Semantic Web is based on XML as its frame language, mediated by ontologies. Ontologies are the new key to meaning in information processing. Although deriving from philosophy, where ontology represents the most general theory about being and the formal structure of everything, in the Semantic Web ontologies are of a very pragmatic value. "Ontologies are about vocabularies and their meanings, with explicit, expressive, and well-defined semantics - possibly machine-interpretable." [...]
    [...]
    Semantic Web and AI
    [...]
    A sharp distinction between the Semantic Web and AI can be made with regard to the relevance and understanding of data and programs. AI is concerned with highly complex programs that in the end are able to understand data, e.g. texts and common sense. The Semantic Web is more concerned with making its data "smart" and giving them some machine-readable semantics. AI tends to replace human intelligence, the Semantic Web asks for human intelligence.
    On the other side, it seems that the Semantic Web is lacking, at least today, strong and complex logics, automated deduction systems, and inference machines, topics which are well developed in AI research and applications.
    [...]
    It is well known that AI has produced a lot of knowledge about Knowledge Representation systems, Concept Analysis, and many other semantics-based endeavours. Nevertheless, the Semantic Web takes a new start on a more pragmatic level, with a more business-oriented vision, and from another angle of the whole spectrum of "mechanizing" knowledge and interactivity.
    Ontologies
    The Semantic Web is based on its ontologies. Ontologies play the key role in the process of realizing semantic information processing. Ontologies are themselves classified into several types. The most general case is the distinction between core ontologies and the upper-level ontology. There are many core ontologies but only one upper-level ontology. The structure of ontology (and ontologies) is strictly hierarchical.
    What are the promises?
    "What are the real values for using ontologies? The real value of using ontologies and the Semantic Web is that you are able to express for the first time the semantics of your data, your document collections, and your systems using the same semantic resource and that resource is machine-interpretable: ontologies. Furthermore, you can reuse what you've previously developed, bring in ontologies in different or related domains created by others, extend yours and theirs, make the extensions available to other departments within your company, and really begin to establish enterprise- or community-wide common semantics." [...]
    RDF (Resource Description Framework)
    [...]
    A description is a set of statements about the resource.
    The RDF model is often called a "triple" because it has three parts: subject, predicate, object.
    Subject: This is the resource that is being described by the ensuing predicate and object.
    Predicate: This is a function from individuals to truth-values with an arity based on the number of arguments it has.
    Object: This is either a resource referred to by the predicate or a literal value.
    Statement: This is the combination of the three elements, subject, predicate, and object. [...]
    All this is governed by the principle of identity.
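    [As an illustration, not part of the quoted text: a minimal Python sketch of the triple model just described; a real system would use an RDF library, and the example IDs are hypothetical.]

    # Minimal sketch of the RDF triple model described above. A statement is
    # the combination of subject, predicate, and object, each identified by
    # a unique ID (IRI), so equality of statements is decidable.

    triples = set()

    def add_statement(subject, predicate, obj):
        """Assert a statement; the identity of the IDs governs equality."""
        triples.add((subject, predicate, obj))

    def query(subject=None, predicate=None, obj=None):
        """Return all statements matching the pattern (None = wildcard)."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    add_statement("ex:Ora", "ex:creatorOf", "http://example.org/page")
    add_statement("http://example.org/page", "ex:title", "Home Page")

    # The connotation either exists or it does not: membership is decidable.
    print(query(predicate="ex:creatorOf"))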
    [...]
    This linguistic characterization of the RDF triple defines a statement and adds to its syntax some meaning guaranteed by the identifiable IDs. This relation is decidable, that is, the connotation exists or it exists not, therefore it is true or false (TND [tertium non datur]).
    Missing linguistic contexts
    At this point I would like to mention that, despite its semantic relation and its foundation in a generally accepted ontology, this RDF triple defines a statement in isolation, excluding its context. Later, contexts are introduced by ontologies. But the RDF definition does not involve them. As a consequence, all pragmatic points of view have to be introduced secondarily. It would be helpful if we could introduce this contextual information at the very beginning of our construction. Without this we will simply repeat the paradoxes of the knowledge engineering of the AI projects. That is, the meaning of a sentence is context-dependent, and contexts are defined by meaningful sentences.
    The Semantic Web Stack
    [...]
    Tim Berners-Lee's three-part vision: (collaborative web, Semantic Web, web of trust).
    [The image of the initial Semantic Web Stack is shown, which was presented on the 6th of December 2001.]
    [...]
    Problems with trust and signature
    [...]
    Hierarchies everywhere
    Taxonomies
    A taxonomy is a semantic hierarchy in which information entities are related by either the subclassification-of or the subclass-of relation.
    [...]
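    [As an illustration, not part of the quoted text: a minimal Python sketch of a strictly tree-shaped taxonomy with a transitive subclass-of check; the class names are hypothetical examples.]

    # Illustrative sketch of a taxonomy as a semantic hierarchy: information
    # entities related by the subclass-of relation, with a transitive check.

    subclass_of = {
        "Dog": "Mammal",
        "Mammal": "Animal",
        "Animal": "Entity",
    }

    def is_subclass(cls, ancestor):
        """Walk the single-parent hierarchy upwards (strictly tree-shaped)."""
        while cls in subclass_of:
            cls = subclass_of[cls]
            if cls == ancestor:
                return True
        return False

    print(is_subclass("Dog", "Animal"))  # True: Dog -> Mammal -> Animal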
    Diagram 1: UML hierarchy diagram of a General Ontology [(GOL)]
    [...]
    Uniqueness means that there is one and only one ontology defined in terms of Urelement, Set, and Entity. This also means that there is only one World, and in the end it means that there is only one WWW, too. But this homogenizes complexity and diversity and is simply a monstrous nominalization. In other words, it is one and only one way of thematizing the world, the mono-contextural one.
    The development of an axiomatized and well-established upper-level ontology is an important step towards a foundation for the science of Formal Ontology in Information Systems. Every domain-specific ontology must use as a framework some upper-level ontology which describes the most general, domain-independent categories of reality. For this purpose it is important to understand what an upper-level category means, and we proposed some conditions that every upper-level ontology should satisfy. The development of a well-founded upper-level ontology is a difficult task that requires a cooperative effort to make significant progress.
    Diagram 2: Axiomatic Foundation of Upper-Level Ontologies
    [...]
    All these axioms of the formal general ontology GOL not only define a (probably) consistent framework for all possible applicative, core ontologies, but also demand a high price for it: there is no dynamics in this framework of ontology. Everything is what it is, e.g. Urelement or Set. Any dynamics is secondary and localized in "chronoids", "topoids", etc., which are special cases of Individuals. In other words, no Urelement can become a set and vice versa, simply because this ontology is mono-contextural, lacking any fundamental perspectivism and interactivity with diversity.

    How to introduce the Dynamic Semantic Web?
    [...]
    It is a philosophical question whether this branch is well understood as a branch and should not better be thematized as something quite different, namely as an interlocking mechanism between core and upper ontologies and their logics, distributed over different irreducible upper ontologies.
    From a pragmatic point of view, DSW is better localized as a new branch or discipline of the Semantic Web.
    The map of the Semantic Web assembles all sorts of theories, methods, and implementations from philosophy to hard-core programming, including AI and database technologies, logics, semantics, context theory, linguistics, neural networks, etc., on all levels of scientificity and scholarship, not excluding some confusions and other cocktail events.
    This allows a great diversity of different approaches to be involved in the development of the Semantic Web and its extension to the Dynamic Semantic Web, and many other inventions, too.
    Decentralization and Heterogeneity
    To deal in a flexible and controllable way with decentralized heterogeneities, hierarchies do not deliver the best possibilities. Here is the moment where heterarchies come into play.
    Decentralization and heterogeneity are obviously in conflict with the strict regimentation of the upper-level (first order) ontology as it is formalized in the general ontology GOL.
    Two different contexts, relating respectively to the species and the environment point of view.
    With such different interpretations of a term, we can reasonably expect different search and indexing results. Nevertheless, our approach to information integration and ontology building is not that of creating a homogeneous system in the sense of a reduced freedom of interpretation, but in the sense of navigating alternative interpretations, querying alternative systems, and conceiving alternative contexts of use.
    To do this, we require a comprehensive set of ontologies that are designed in a way that admits the existence of many possible pathways among concepts under a common conceptual framework. This framework should reuse domain-independent components, be flexible enough, and be focused on the main reasoning schemas for the domain at hand.
    Domain-independent, upper ontologies characterise all the general notions needed to talk about economics, biological species, fish production techniques; for example: parts, agents, attribute, aggregates, activities, plans, devices, species, regions of space or time, etc. [...]
    [...]

    Heterarchies, in general
    In contrast to the Semantic Web with its tree structure, that is, with its fundamental hierarchic organization on all levels of conceptualization and realization, the Dynamic Semantic Web comes with a strong decision for heterarchies.
    Heterarchies are not fully understood if we are not studying the interactivity between hierarchies. In this sense heterarchies are the framework of the interactivity of hierarchies. In other words, heterarchies are ruling the interplay between an irreducible multitude of different trees.
    One great advantage is that each of these trees inherits the well-known and proven methods and technologies of its classical predecessor, that is, logics, taxonomies, proof systems, etc.
    "Whereas hierarchies involve relations of dependence and markets involve relations of independence, heterarchies involve relations of interdependence."
    Stark has proposed "Heterarchy" to characterize social organizations with an enhanced capacity for innovation and adaptability.
    ["]Networked or lateral organizations are in direct contrast with the tree-like, vertical chains of control of traditional hierarchies. The second feature means that heterarchies require diversity of components and building blocks." [...]
    [...]
    To give a more transparent modeling of the interactivity between hierarchies as it is proposed by the proemial relationship, it may be helpful to put the whole construction and wording into a UML diagram and to use the modeling of heterarchy [...] as a helpful tool to explicate proemiality in terms of UML modeling.
    [...]
    Diagram 3: UML heterarchy diagram
    [...]
    Abstract theories
    Each hierarchy has its own ontology, logic, algebra, proof systems etc. To give an idea of the concept of interactivity between hierarchies let's introduce the terminology of abstract objects or types or theories.
    [...]
    Heterarchies manage distributed hierarchies, therefore we are able to distribute abstract theories as such. This in itself would produce an interesting type of parallelism, architectonic parallelism. But more interesting are the interactions between hierarchies. A very conservative interaction is a one-to-one translation from one abstract theory to another abstract theory, based on morphisms. This form of interaction is basic for a successful realization of DSW applications.
    But the advantages of DSW come into play with the possibility of metamorphosis, that is, the change of categories. This capability of DSW enables the evolution of the system and the discovery and creation of new domains, and marks the distinct difference from other architectures of a Semantic Web. [Bingo!!!]
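    [As an illustration, not part of the quoted text: a minimal Python sketch of a morphism-based translation between two distributed "abstract theories", and of a metamorphosis as a change of category; all names and terms are hypothetical.]

    # Two distributed "abstract theories" with their own vocabularies.
    theory_T1 = {"kind": "entity-theory", "terms": {"order", "customer"}}
    theory_T2 = {"kind": "process-theory", "terms": {"ordering", "buying"}}

    # Conservative interaction: a one-to-one translation based on a morphism.
    morphism_T1_T2 = {"order": "ordering", "customer": "buying"}

    def translate(statement, morphism):
        """Map a statement term by term from one theory into the other."""
        return [morphism.get(term, term) for term in statement]

    print(translate(["customer", "order"], morphism_T1_T2))

    # Metamorphosis: the same item changes its category between theories,
    # here from an entity ("order") to a process ("ordering"), not merely
    # its name.
    def metamorphose(term, source, target):
        return {"term": term, "from": source["kind"], "to": target["kind"]}

    print(metamorphose("order", theory_T1, theory_T2))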
    [...]
    A simple example
    There is an easy way of producing conflicts in a dialogical system, if e.g. L1 declares A as a simple object and L2 simultaneously declares A as a complex object, that is, a structure. Obviously it is possible, in the polycontextural approach, to model this conflict and to resolve it in another logical system, say L3, and this without producing a metasystem subordinating L1 and L2.
    [...]
    Furthermore, the conflict has a clear structure: it is a metamorphosis of the terms "simple object" in L1 and "structure" in L2. This metamorphosis is a simple permutation between sorts over two different contextures, based on the chiastic structure of the mediation of the systems. But it respects the simultaneous correctness of both points of view with respect to being a "simple object" and being a "structure". In this sense it can be called a symmetrical metamorphosis.
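    [As an illustration, not part of the quoted text: a minimal Python sketch of this example; the encoding of L1, L2, and the mediating L3 is hypothetical.]

    # L1 declares A a simple object, L2 declares A a complex object (a
    # structure); L3 records the mediation of both views instead of
    # subordinating them under a metasystem.

    L1 = {"A": "simple object"}
    L2 = {"A": ("structure", ["a1", "a2"])}  # A as a complex object

    def mediate(name, view1, view2):
        """Model the conflict in a third contexture without erasing either
        view: both readings remain simultaneously correct, and only their
        chiastic exchange is made explicit (a symmetrical metamorphosis)."""
        return {name: {"in L1": view1[name], "in L2": view2[name],
                       "relation": "symmetrical metamorphosis"}}

    L3 = mediate("A", L1, L2)
    print(L3)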
    Today computing is often characterized by its interactivity. But the programming languages have not changed to respond to this situation. They are still, in principle, monologic.
    Ontology and the Semantic Mapping Problem
    Why do we need all these abstract theories of translation and metamorphosis?
    "One important issue in understanding and developing ontologies is the ontology or semantic mapping problem. We say "or semantic problem" because this is an issue that affects everything in information technology that must confront semantic problems - that is, the problem or representing meaning for systems, applications, databases, and document collections. You us always consider mappings between whatever representations of semantics you currently have (for systems, applications, databases, and document collections) and some other representation of semantics (within your own enterprise, within your community, across your market, or the world).
    This semantic problem exists within and without ontologies. That means that it exists within any given semantic representation such as an ontology, and it exists between (without) ontologies. Within an ontology, you will need to focus on a specific context (or view). And without (between) ontologies, you will need to focus on the semantic equivalence between different concepts and relations in two or more distinct ontologies." [...]
    [...] But don't forget, these ontologies are applied, core ontologies, regional, and not general ontologies. They are parts, subsystems, instantiations of the one and only one general ontology, as formulated in GOL. This is an enormous restriction. Because, before we can interact with each other we have to agree to this general and global framework of GOL. But this is not always reasonable at all.
    The mechanism of metamorphosis [Bingo!!!]
    DSW is introducing mappings, morphisms, translations and metamorphosis between first order ontologies, and is not concerned with regional, core ontologies only.
    How does it work? The basic framework is given by the proemial relationship (Günther 1970). [Bingo!!!]
    "The answer is: we have to introduce an operator (not admissible in classic logic) which exchanges form and content. In order to do so we have to distinguish clearly between three basic concepts. We must not confuse
    a relation
    a relationship (the relator)
    the relatum.
    The relata are the entities which are connected by a relationship, the relator, and the total of a relationship and the relata forms a relation. The latter consequently includes both, a relator and the relata.
    However, if we let the relator assume the place of a relatum the exchange is not mutual. The relator may become a relatum, not in the relation for which it formerly established the relationship, but only relative to a relationship of higher order. And vice versa the relatum may become a relator, not within the relation in which it has figured as a relational member or relatum but only relative to relata of lower order.

    We shall call this connection between relator and relatum the 'proemial' relationship, for it 'pre-faces' the symmetrical exchange relation and the ordered relation and forms, as we shall see, their common basis." [...]
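    [As an illustration, not part of Günther's quoted text: a minimal Python sketch of the proemial relationship under a hypothetical encoding with explicit orders.]

    # A relator may become a relatum only relative to a relationship of
    # higher order, and a relatum may become a relator only over relata of
    # lower order; the exchange is never mutual within the same relation.

    class Relation:
        def __init__(self, relator, relata, order):
            self.relator, self.relata, self.order = relator, relata, order

    # Order 1: "connects" relates a and b.
    r1 = Relation(relator="connects", relata=["a", "b"], order=1)

    def promote(relation):
        """The relator of order n appears as a relatum of a relation of
        order n+1 (hypothetical higher-order relator name)."""
        return Relation(relator="meta-connects",
                        relata=[relation.relator],  # former relator, now relatum
                        order=relation.order + 1)

    r2 = promote(r1)
    print(r2.order, r2.relata)  # 2 ['connects']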
    [...]

    Development of a DSW Prototype Business Application
    Increase in effectiveness
    This "killer application" will show a significant increase in flexibility, which goes hand in hand with an increase in speed and transparency of semantic information processing.
    Attributes of a given static or stable, synchronic system [Semi-Bingo!!!]
    flexibility
    speed
    security
    transformation
    Attributes of [a] dynamic, evolving system [Bingo!!!]
    The dynamics of the semantic information processing in DSW opens up the possibility to create new scenarios and to invent new forms of interaction between business partners.
    evolution
    metamorphosis
    co-creation
    self-modification
    [...]

    Web Services and Semantic Web, the classical view
    [...]/xml/library/x-ebxml/
    Diagramm 6 Web Service Scenario
    [...]
    Diagramm 7 Semantic Web Services
    [...]
    [...] service-oriented architecture
    [...]

    A DSW business application is a DSW Semantic Web Service
    [...] the evolving heterogeneous complexity of what we call the WWW.
    The Web Services are not a homogeneous business. They come in different and not homogeneous forms, that is, again, in heterogeneous definitions.
    Heterogeneity itself is not a static term either. It is a nominator for a flexible, loosely coupled, evolving complexity of decentralized systems.
    The Web is not only defined by its abstract specification but also by its use. The meaning of a sentence is not given by a catalog of administered meanings, but by its pragmatic use. And the administration of meaning is one and only one very special use of sentences and their meaning.
    The picture of the situation has to be enlarged from Syntax&Semantics to, at least, Syntax&Semantics&Pragmatics (Hermeneutics).
    Pragmatics or Hermeneutics is introducing different points of view, different irreducible contexts, that is, contextures, different approaches etc.
    Syntax&Semantics&Pragmatics&Mediation
    Mediation (Proemiality, Chiasm) introduces the interlocking mechanism, the interactivity of all these different contextures.
    [...]
    [...] "Also obvious is that by the default the communication between observers can only be of informal nature. Consistent logical systems are only defined within a given context and, in general, cannot be used for knowledge transfer between different ontologies. The consequence is that some means of informal communication, such as natural language or heuristic mediation systems, is inevitable." [...]
    [...]

    What has to be developed to realize DSW?
    Dynamic Semantic Web (DSW) consists in general of two main parts:
    1. poly-Semantics
    2. inter-Semantics or Pragmatics of mediation and navigation
    [...]
    poly-Semantics deals with the decomposition and distribution of different heterogeneous taxonomies, ontologies and their methods.
    inter-Semantics deals with the interlocking mechanisms between the different heterogeneous contextures and their methods.
    poly-Ontologies: Development of polycontextural ontologies
    poly-Logics: Development of polycontextural logics and proof systems

    How to establish a DSW system in an existing company?
    [...]
    What are the Tools?
    Research and commercial tools for creating ontologies
    [...]
    Evolving and self-modifying systems
    Dynamics between Ontologies and contexts
    [...] Semiotics and Category Theory
    Further Extension of the Smartness of objects (data) [...]
    Logically it is a chiasm of Universe and sorts in many-sorted first order logics.
    Heterarchies, in ontologies
    Heterarchies, in logics
    Heterarchies, in proof systems
    Heterarchies, in taxonomies

    Cybernetic Ontology and Web Semantics
    [...]

    Life as Polycontexturality
    [...]
    New ontology, new Logics
    This essay presents some thoughts on an ontology of cybernetics. There is a very simple translation of the term "ontology". It is the theory of What There Is (Quine). But if this is the case, one rightly expects the discipline to represent a set of statements about "everything". This is just another way of saying that ontology provides us with such general and basic concepts that all aspects of Being or Reality are covered. Consequently all scientific disciplines find their guiding principles and operational maxims grounded in ontology and legitimized by it. Ontology decides whether our logical systems are empty plays with symbols or formal descriptions of what "really" is.
    The following investigation arrives at the result that our present (classic) ontology does not cover "everything". It excludes certain phenomena of Being from scientific investigation, declaring them to be of irrational or metaphysical nature. The ontologic situation of cybernetics, however, is characterized by the fact that the very aspect of Being that the ontologic tradition excludes from scientific treatment is the thematic core and center of this new discipline. Since it is impossible to deny the existence of novel methods and positive results produced by cybernetic research, we have no choice but to develop a new system of ontology together with a corresponding theory of logic. The logical methods that are used faute de mieux in cybernetics belong to the old ontological tradition and are not powerful enough to analyze the fresh aspects of Reality that are beginning to emerge from a theory of automata.
    [...]
    Interactivity, between trans-contextural and transjunctional operators
    [...] These transjunctional and trans-contextural operators are operators in an exact formal sense, not only defined logically inside a contexture but also between contextures. The concept and formal definition of transjunctions had been introduced by Gunther in his famous paper Cybernetic Ontology and Transjunctional Operations (1962), even before he radicalized his position to a transition from multiple-valued ontologies to poly-contexturality. A more general approach to interactivity between contextures was introduced by Günther in "Natürliche Zahl und Dialektik" (1972), but this concept goes back at least to the concept of an inter-ontology as considered in "Natural Numbers in Trans-Classic Systems" (1970): "The philosophical theory on which cybernetics may rest in the future may well be called an inter-ontology." [...]
    [...]

    [...]

    Heidegger's radical deconstruction of ontology
    self-modifying media
    [...]

    [...]

    The world as a grid of upper-level ontologies
    The significance of Heidegger's questioning of classical ontology has a very practical relevance for Web Semantics: it opens up the possibility of a multitude of interacting fundamental ontologies, that is, of upper-level ontologies. Aristotelian ontology, as proposed by the "hierarchy movement" of Web Semantics, is blind to its restriction to one and only one contexture.
    [...]

    Ontology and logics of multi-media

    Morphogrammatics of XML

    Ontologies in different fashions
    many-sorted logics

    fibred category systems

    polycontexturality
    Fibres and navigation

    Revival of classic ontology in Web Semantics?
    [...]
    Upper and core ontologies provide the framework to integrate in a meaningful and intersubjective way different views on the same domain, such as those represented by the queries that can be done to an information system.
    [...]

    Flexibility ruled by an upper framework?
    "To do this, we require a comprehensive set of ontologies that are designed in a way that admits the existence of many possible pathways among concepts under a common conceptual framework."
    [...]
    Navigation and negotiation
    [...]
    Kenogrammatics as a common base of different ontologies
    Different ontologies, if not anyway based on a common upper ontology and a common first-order logic, have one thing in common, even if they are incomparably different and irreducible to a common ground: they have, each for itself, a position. [...]
    [...]
    Formal ontology, category theory and kenogrammatics
    Formal upper ontologies are often described in terms of set theory. A more general approach would be to formalize ontologies with the means of category theory. The most basic and abstract distinction in category theory is the distinction between morphisms and objects.
    With this, another introduction of the empty positions, the kenograms, of formal upper ontologies can be offered. Two ontologies may be conceptually different in the sense that one ontology is based on its objects, similar to the set-theoretically based ontology, and the other one is based on its morphisms, like a more processual and dynamic ontology. What are objects in one ontology are morphisms in the other one. This may be a clue for a translation between both. This translation could be done by, again, a category theory which is based more on objects or more on morphisms. Obviously, with this procedure we would establish some of the well-known infinite regresses of metalanguage constructions.
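    [As an illustration, not part of the quoted text: a minimal Python sketch of the object/morphism chiasm with hypothetical data; what are objects in one ontology become (identity) processes in the other.]

    # An object-based ontology (set-theoretic flavor) ...
    object_based = {
        "objects": {"water", "ice"},
        "morphisms": {("water", "freezes_to", "ice")},
    }

    def to_morphism_based(onto):
        """... translated into a morphism-based (processual) ontology:
        every object is read as the identity process on itself, and the
        named processes become the primary entities."""
        processes = {name for (_, name, _) in onto["morphisms"]}
        processes |= {"id_" + obj for obj in onto["objects"]}
        return {"processes": processes}

    print(to_morphism_based(object_based))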
    [...]
    Kenogrammatic systems are not meta-languages but in some sense proto-inscriptual grammars.
    [...]
    Web Semantics: Science, Services and Agents on the World Wide Web
    This interdisciplinary journal focuses on research at the intersection of three major research areas: semantic web, agent technology and grid computing. We call this interdisciplinary field Web semantics. [...] This is often referred to as the second or third generation of the Web.
    Background: The data in computers exists in a bewildering variety of mutually incompatible forms, and ever more intense efforts are needed to smooth the process of data integration. The most important such efforts lie in database standardization achieved through the construction of benchmark taxonomies into which all the classification systems pertinent to a given domain would need to be translated only once. Benchmark taxonomies can ensure that all databases calibrated in their terms would be automatically compatible with each other.
    'Ontology' is the name given by information scientists to the construction of such benchmark taxonomies. [...]
    Information systems ontology has implications beyond the domain of data integration. Its methods are used for purposes of information retrieval and extraction from large corpora and libraries (for example of medical or scientific literature). These methods are currently being applied to the problems of navigation on the Internet in work on the so-called Semantic Web. They are used as a basis for work on natural language processing and automatic translation, in enterprise integration, and, most significantly, as a means of integrating the results of inquiries in neighboring scientific fields - for example when inquiries in computational chemistry or structural biology need to be cross-calibrated with the results of inquiries at higher (for example medical or epidemiological) levels of granularity [...].
    [...]

    On the General Ontological Foundations of Conceptual Modeling
    [...]

    [...]

    Formal Ontology and First Order Logic, revisited
    [...]

    Contributions to the Axiomatic Foundation of Upper-Level Ontologies
    [...]

    Formal GOL and the nature of Digital Metaphysics
    [...]

    Formal GOL and the Metaphor of Cellular Computation
    [...]
    Nature as a [Cellular Automata Machine (]CAM[)] [Bingo!!! See also the CAM Brain Machine (CBM).]

    Orthogonalizing the Issues
    [...]
    What to do with all that for a theory of semantics for a Semantic Web?
    The Internet is not given, its elements are not entities; the Internet has to be read and its elements have to be interpreted. Interpretation involves the freedom to choose a thematization, a perspective of cognition; it involves not only an observer but hermeneutical procedures. Otherwise we understand by the Internet a system of being to be studied and classified by means of ontology in the very sense, even if modernized and formalized, of the Aristotle-Leibniz tradition.
    The project Semantic Web is a challenge for a formalized and operative hermeneutics. Set-theoretical and mereological ontology is mapping only an extremely static and one-sided hierarchical aspect of the "living" tissue of the Web.
    A multitude of interacting hierarchies is a question of cognition and volition interpreting the textures of the Web.
    Translations from one language to another are not based on a common natural urlanguage, but on the co-creative interplay between different languages, natural or artificial.
    Ontology in the sense of GOL is "subjectless". It is a theory of being excluding self-referentiality by definition. Therefore it is a monolithic theory of what is, of objectivity without any freedom of interpretability. Again, this is very useful for subjectless domains, but useless, if not dangerous, in all senses of the word, for worlds including subjects. Today it seems to be quite tricky to find such a subjectless world, especially if we are forced to ask who is producing this ontology of a subjectless world and even our robots are asking for more "subjectivity". Ontology as "the most general possible theory about the world" is fundamentally incomplete. It is incomplete on a semiotic level, as the incompleteness of ontology and the incompleteness of logic, and on a graphematical (grammatological) level it is not only kenogrammatically incomplete but blind to its own kenogrammatics. To insist on a realist point of view to build a general ontology, in contrast to a conceptualist understanding of ontology allowing some interpretability of the world, is a decision which cannot be justified easily using scientific and philosophical arguments. At least this decision is not part of the "new" formal ontology. At this point we are confronted with questions of Power and epistemological fundamentalism.

    Dynamic Semantic Web
    Ontologies: Their Glory and the new bottlenecks they create:

  • problems in searching information,
  • problems in extracting information,
  • problems in maintaining information, and
  • problems in generating information. [...]

    ["]The advent of Web services, and the Semantic Web described by domain ontologies, highlight the bottleneck to their growth: ontology mapping, merging, and integration.["]
    Stephen L. Reed and Douglas B. Lenat, Mapping Ontologies into Cyc

    The Dynamic Semantic Web has to deal with the dynamics of the Web.
    The Web is at first glance at least distributed, dynamic, massive, and an open world (Heflin, Hendler).
    What is the Semantic Web? It is "a vision of the future in which the "web of links" is replaced by a "web of meaning" where the meaning is machine-readable["].
    To introduce a web of meaning, ontologies appear as the main concepts and tools.
    Therefore, the first job of DSW is to develop a dynamics of ontologies.

    [Simple HyperText Markup Language (HTML) Ontology Extensions (]SHOE[)]: Dynamic ontologies on the Web
    "Dynamic ontologies on the Web" is the title of an approach by the authors of SHOE [(Heflin, Hendler: Dynamic Ontologies on the Web. 31st of July - 2nd of August 2000)].
    [...]
    All these concepts are realized and have their semantics in the framework of Hierarchy ruled by FOL.
    Problems
    Introduction, Navigation, Negotiation and Integration are restricted to hierarchical Unification.

    Polycontextural Dynamics
    DSW cannot be realized by restricting it to this kind of ontological dynamics. In contrast to the mono-contextural approach of SHOE, DSW has to be realized in the framework of the heterarchy of polycontextural logics and ontologies.
    How can we map ontologies onto Heterarchies?
    A first but useful explication of the concept of heterarchy is given by the [Unified Modeling Language (]UML[)] heterarchy diagram.

    Heterarchies
    Hierarchies are distributed and mediated by the rules of heterarchy.
    Each hierarchy contains ontologies in the classical sense.

    Proemial relationship
    The mechanism of the interplay between different ontologies is realized by the proemial relationship.

    Poly-Semiotics
    [...]

    Short comparison of SHOE and DSW
    Is it possible to develop a Semantic Web with its ontology and logics without having to forget and to deny everything we learned from philosophy, linguistics, logics, semiotics, grammatology and AI in the last century?

    Multiple inheritance
    [...]

    Ambiguity and polysemy
    [...]

    Chiastic polycontextural modelling
    [...]
    In accordance with the constructivist point of view of conceptualizing as a semiotic process, in contrast to the neo-Aristotelian fundamentalist position of GOL, terms, objects, and concepts have to be understood as relative to their use (Wittgenstein, Derrida) and not as pre-given entities of the world (universe).

    Architectonic Parallelism of DSW
    Navigation
    Negotiation
    Interactivity
    Complexity
    Reflectionality

    Dynamics in the Semantic Web Context
    [...]

    Dynamic Ontologies (Heflin, Hendler)
    The Web is dynamic.
    The Web is massive.
    The Web is an open world.
    in: Towards The Semantic Web: Knowledge Representation In A Dynamic, Distributed Environment, Heflin 2001
    [Knowledge Representation on the Internet: Achieving Interoperability in a Dynamic, Distributed Environment, Heflin, 31st of July - 2nd of August 2000]
    Ontology Mapping and Translation
    ["]Will the inevitable proliferation of ontologies really solve the semantic interoperability problem? The answer is clearly no. The widespread adoption of ontologies only gets us half-way to semantic interoperability nirvana by forcing the use of explicit semantics. The other major challenge is mapping from one agent's ontology to another agent's ontology. The approaches to solve this problem range from static manually created ontology mappings to dynamic ondemand agent-based negotiation of ontology mappings.["]
    in: Hendler, Semantic Web Technologies for Aerospace
    [...]

    Water: Static and Dynamic Semantics of the Web
    [...]
    Less concern has been given to dynamic semantics of the Web, which is equally important. Dynamic semantics have to do with the creation of content, actions which may be guided by

  • User-initiated interface actions
  • Time
  • Users' personal profiles
  • Data on a server

    and other conditions.

    Cultural dynamic Web
    [...]

    Dynamic Semantic Web
    In contrast to the preceding approaches, the PCL based contribution to a Semantic Web and its dynamics does not accept the limitations of expression, computation, and interactivity forced by logic and its logical systems.
    Peter Wegner has clearly analyzed the reason for the failure of the Japanese 5th Generation project: its belief in logic and its logic based programming languages, like Prolog. We do not have to accept all the theses about the change of paradigm in computer science proposed by Wegner, but I agree fully with his analysis of the role of logic. But again, Wegner and his school are not able to think about changing logics; instead he proposes some more empirical concepts to develop his intuition of a paradigm change based on interactivity.
    [...]

    Dynamics with Modularity
    [...] Ontologies: an Approach to Resolve Semantic Heterogeneity in Databases
    Figure 5. Global schema generation based on a common ontology produced by integration of domain ontologies
    This ontology dynamics is based on a constructivist epistemology, not naively presupposing data systems. Different communities with different ontologies are introduced.
    This global schema of ontology integration does not tell us what happens with the presupposition of the difference of the ontologies p and q, namely their different communities P and Q.
    [...]
    Obviously, for this scheme of Degrees of Similarities of Ontologies, everything [...] about classic semiotics is true in an even stricter sense for ontologies.
    Semiotics and Ontologies
    ["]Semiotics, as the general theory of signs, would seem a natural place to seek a general [Human-Computer-Interface (]HCI[)] framework. However
    (1) semiotics has not developed in a precise mathematical style, and hence does not lend itself well to engineering applications;
    (2) it has mostly considered single signs or systems of signs (e.g., a novel, or a film), but not representations of signs from one system by signs from another, as is needed for studying interfaces;
    (3) it has not addressed dynamic signs, such as arise in user interaction; and
    (4) it has not paid much attention to social issues such as arise in cooperative work.
    [... "]
    Joseph Goguen, Algebraic Semiotics and User Interface Design, 2000

    Dynamics in Ontologies and Polysemy
    Dynamic Ontologies in SHOE
    SHOE is a well-established approach to the Semantic Web emphasizing the dynamics of ontologies.
    [...]
    Ontology Dependencies (SHOE)
    Below is a tree showing the dependency ordering [...] of each ontology [(list points added)].

  • Base Ontology [...]
    • Dublin Core Ontology [...]
    • General Ontology [...]
      • Beer Ontology [...]
      • Commerce Ontology [...]
      • Document Ontology [...]
        • University Ontology [...]
          • Computer Science Department Ontology [...]
      • Personal Ontology [...]
    • Measurement Ontology [...]
      • Commerce Ontology [...]
    • TSE Ontology [...]

    Dissemination of Ontologies, a more formal description
    Polycontextural logics make it possible to add a new operation for extending ontologies. The horizontal operation of mediation, MED, is used to add ontological modules not vertically, like the USE operation, but horizontally, and therefore produces a heterarchical organization of the ontological modules.
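    [As an illustration, not part of the quoted text: a minimal Python sketch contrasting a vertical USE with a horizontal MED operation; the encodings and module names are hypothetical.]

    def USE(base, module):
        """Vertical extension: the module depends on and is subsumed under
        its base, growing the tree (as in SHOE)."""
        return {"base": base, "extends": module, "organization": "hierarchy"}

    def MED(module_a, module_b):
        """Horizontal mediation: the modules stay side by side; neither
        subsumes the other."""
        return {"mediated": [module_a, module_b], "organization": "heterarchy"}

    general = {"name": "General Ontology"}
    commerce = {"name": "Commerce Ontology"}
    department = {"name": "Department Ontology"}

    print(USE(general, commerce))     # tree-like, vertical growth
    print(MED(commerce, department))  # tabular, heterarchical arrangement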
    [...]
    Mediated ontologies are opening up the possibility for metamorphic changes of the basic categories of the ontologies involved in the interaction. [...]
    [...]
    [... N]ot the content but the very structure of the whole ontology is under question. If the modules of whatever content are added vertically, we stay in the western-centred paradigm of thinking. If we allow a horizontal organization of the ontologies, we leave this empire of hierarchical power for a heterarchical world of a chiastic interplay of world views.

    Computational complexity of hierarchy and heterarchy [Bingo!!!]
    [...]

    Polysemy: Ontology Extension with the procedure rename
    An interesting case of combining ontology modules arises if the ontologies contain equal terms. In contrast to simple multiple inheritance, the situation of polysemy is introduced.
    [...]
    The main principle of ontology demands the disambiguation of the polysemy of the term used. The simplest and historically oldest method to do this is given by renaming the terms. This works perfectly in a very small world. But as we have learned, not only the weather system is massive, complex, and open-worlded, but so is our WWW.
    It is probably not very difficult to find, even if we restrict ourselves to the English language, hundreds of different meanings of a term [...]. Therefore the renaming procedure can easily explode into a massive and complex topic in itself, destroying the aim of the simple and innocent procedure of renaming.
    [...]
    Extension of ontologies by renaming does not violate the principle of verticality, that is, hierarchy. Therefore the tree keeps growing, and with it its computational complexity.
    It becomes obvious that the procedure of renaming is part of the broader activity of negotiation. Without a proper mechanism for solving the problems of renaming, the amount of non-machine-assisted negotiation grows in a counter-productive way, conflicting with the very aims of the Semantic Web to support machine-readable semantic information processing.
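    [As an illustration, not part of the quoted text: a minimal Python sketch of the renaming procedure and its growth; the module names and meanings are hypothetical.]

    # Clashing terms are disambiguated by prefixing the module name, so the
    # tree keeps growing, and with it the number of terms to be negotiated.

    furniture = {"Chair": "something to sit on"}
    department = {"Chair": "head of a department"}

    def merge_with_renaming(modules):
        merged = {}
        for module_name, terms in modules.items():
            for term, meaning in terms.items():
                # Rename on clash instead of mediating the readings.
                key = term if term not in merged else module_name + ":" + term
                merged[key] = meaning
        return merged

    print(merge_with_renaming({"furniture": furniture, "department": department}))
    # {'Chair': 'something to sit on', 'department:Chair': 'head of a department'}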
    [...]
    20 hits in 186 ontology files
    Douglas B. Lenat [Cyc]
    The success of the Semantic Web hinges on solving two key problems:
    (1) enabling novice users to create semantic markup easily, and
    (2) developing tools that can harvest the semantically rich but ontologically inconsistent web that will result.
    To solve the first problem, it is important that any novice be able to author a web page effortlessly, with full semantic markup, using any ontology he understands. The Semantic Web must allow novices to construct their own individual or specialized-local ontologies, without imposing the need for them to learn about or integrate with an overarching, globally consistent, master ontology.
    The resulting Web will be rich in semantics, but poor in ontological consistency. Once end users are empowered by the Semantic Web to create their own ontologies, there will be an urgent need to interrelate those ontologies in a useful way. The key to harvesting this new semantic information will be the creation of the Semantic Web-aware agents that can cope with a diversity of meanings and inconsistencies across local ontologies. These agents will need the capability to interpret, understand, elaborate, and translate among the many heterogeneous local ontologies that will populate the Semantic Web.
    These agents will not only "need the capability to interpret, understand, elaborate, and translate ..." but they also have to be non-human agents, that is, programs. What is difficult to master for human beings should be a fine job for our new agents. It seems that the unsolved problems of AI are emerging again in a new setting.

    Polycontextural modelling of polysemy
    The Internet is a giant semiotic system. Sowa [(see the Semantic Network (SN) knowledge representation)]
    Polycontextural modelling can be made more transparent if we don't forget that the concept of ontology is only a very reduced case of general semiotics. (I leave it for further reflections to abandon also semiotics in favor of polycontexturality.)
    Exposing a polycontextural modelling of polysemy I am forced to use semiotic distinctions not available in the Semantic Web language SHOE.

    Semiotic Diagram
    Remember Charles Sanders Peirce:
    [...]
    Sowa:
    Many of the ontologies for web objects ignore physical objects, processes, people, and their intentions.
    [...]
    Diagram 22
    World [map or globe] → Observation
    Model [graph] → Simulation
    Theory [logic formula] → Deduction
    Pure logic is ontologically neutral
    ["]It makes no presuppositions about what exists or may exist in any domain or any language for talking about the domain. To represent knowledge about a specific domain, it must be supplemented with an ontology that defines the categories of things in that domain and the terms that people use to talk about them. The ontology defines the words of a natural language, the predicates of predicate calculus, the concept and relation types of conceptual graphs, the classes of an object-oriented language, or the tables and fields of a relational database.["] Sowa
    Diagram 23
    Documents → {Controlled Languages} → Logic [(]SQL[,] Java[,] XML[,] Graphics[,] VHDL[)] → {Controlled Languages} → Documents
    Everyone who has studied polycontextural logics knows that logic is not as neutral as the community of logicians and computer scientists believes. At least, logic presupposes a special type of formality in order to be accessible to formalization, and this formality as such can turn out to be a logic restricting content. But it is crucial to understand this neutrality statement, because it describes exactly the situation as it is established in contemporary (western) thinking.
    For other opinions and paradigms, ask Charles S. Peirce or Gotthard Günther.

    Reflectional semiotic modelling of polysemy
    A reflectional analysis of polysemy is an analysis of the semiotic actions or behaviors of agents which lead to the phenomenon of polysemy and its possible conflicts with other semiotic or logical principles. [...]
    Mono-contextural introduction of "isa":
    S1: Chair is part of a furniture ontology
    S2: Chair is part of a department ontology
    S3: Chair is part of a vocabulary
    Poly-contexturally we have to distinguish the situations "isa as":
    O1S1: Chair as such, that is, as an object "Chair"
    O2S2: Chair as such, that is, as a person "Chair".
    O3S3: Chair as such, that is, as the token "Chair"
    Here, "as such" means, that the ontologies Person, Object and Vocabulary can be studied and developed for their own, independent of their interactivity to each other but mediated in the constellation of their poly-contexturality, that is, their distribution over 3 loci.
    [...]
    Query's contradiction
    [...]
    Extension by mediation
    [...]

    Some Polylogical Modelling of Polysemy
    [...]
    [... I]nferencing in poly-contextural systems is architectonically parallel.
    [...]

    Inconsistency, Contradiction and Polysemy
    Building, Sharing, and Merging Ontologies, John F. Sowa
    [...]
    From merging to mediating interactivity
    From an actional point of view, in contrast to an entity-ontological standpoint, it is more appropriate to consider the process of merging as a process of conflict resolution. This type of modelling is reasonable only if we accept the relevance of the two different points of view, if both positions have their own reason to exist. Otherwise it would only be a question of terminology and adjustments (renaming, relabelling).
    [...]

    Polycontextural modelling of multiple inheritance
    Ontologies differ in how they handle the case of inheriting multiple properties.
    [...]
    It is simply bad propaganda and counter-productive advice if I have to read in different Web Semantics papers that they have solved the multiple inheritance problem properly.
    [...]
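    [As an illustration, not part of the quoted text: a minimal Python sketch showing that even a single language fixes multiple inheritance only by one policy, the C3 linearization (method resolution order); the class names are hypothetical.]

    class Publication:
        def describe(self):
            return "a publication"

    class Book(Publication):
        def describe(self):
            return "a book"

    class Electronic(Publication):
        def describe(self):
            return "an electronic resource"

    class EBook(Book, Electronic):
        pass  # Which 'describe' is inherited? The MRO decides.

    print([c.__name__ for c in EBook.__mro__])
    # ['EBook', 'Book', 'Electronic', 'Publication', 'object']
    print(EBook().describe())  # "a book", by the C3 policy, not by necessity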

    Query, questions and decisions
    [...]
    As long as our queries answer our questions with only non-ambiguous, non-polysemous statements, we are dealing with a very reduced case of semantics. It is semantics reduced to a machine-readable and machine-understandable situation, and therefore there is no need for cognitive reflectional decisions.
    [...]
    Semantics as a reflectional system does not deal primarily with facts but with meanings. Meanings are at least reflectionally multi-leveled, or as we know from Second-order Cybernetics, second-order concepts. That is, concepts of concepts (of facts).
    [...]

    From Metapattern to Onto[clone]
    Parallelism in Polycontextural Logic
    In addition to the well-known OR- and AND-parallelism, polylogical systems offer two main extensions to the logical modeling and implementation of parallelism: first, the distribution of the classical situation over several contextures, and second, the transcontextural distributions ruled by the different transjunctional operators. The distribution over several contextures corresponds to a concurrent parallelism where the different processes are independent but structured by the grid of distribution. The transcontextural parallelism corresponds to a parallelism with logical interactions between different contextures.
    [...]
    Prolog is based not only on its logic, used as an inference machine, but also on its semantics or ontology, realized as a database. Therefore the process of parallelizing has to deal with a deconstructive dis-weaving of the database's ontology.

    Strategies towards a polycontextural parallelism in Prolog
    As in the case above, where the number systems had to be cloned, in the Prolog case the database has to be decomposed into disjoint parts. These separated conceptual parts, or conceptual subsystems, have to be distributed over different contextures in a mediated polycontexturality.
    Additionally, the Prolog parallelism, which is based on OR- and AND-parallelism, has to be mapped into distributed logics, that is, into a polylogical system.
    The Prolog example allows us to explain in a more plausible way the decomposition or cloning of the common universe of discourse, that is, the database of facts, into different subsystems. And secondly, it is easier to introduce parallelism based on polycontextural logic than on arithmetics and combinatory logics. Polycontextural logic is not widely known but more accessible than combinatory poly-logic and poly-arithmetics [...]. Additionally, there has existed since 1992 a working implementation of a tableaux proof system of an interesting subsystem of polycontextural logics in ML, running on Unix systems [...].
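    [As an illustration, not part of the quoted text: a minimal Python sketch of decomposing a Prolog-style fact base into disjoint contextures; the facts and predicate names are hypothetical.]

    facts = [
        ("parent", "tom", "bob"), ("parent", "bob", "ann"),          # family
        ("supplies", "acme", "bolts"), ("supplies", "apex", "nuts"), # commerce
    ]

    # Decompose the one universe of discourse into disjoint subsystems.
    contextures = {
        "family":   [f for f in facts if f[0] == "parent"],
        "commerce": [f for f in facts if f[0] == "supplies"],
    }

    def solve(db, goal):
        """Trivial one-predicate resolution: match the goal pattern against
        the facts of a single contexture (None acts as a variable)."""
        pred, args = goal
        return [f[1:] for f in db if f[0] == pred and
                all(a is None or a == b for a, b in zip(args, f[1:]))]

    # Each contexture answers autonomously and could run in parallel
    # (architectonic parallelism); mediation would interrelate the answers.
    print(solve(contextures["family"], ("parent", ("tom", None))))
    print(solve(contextures["commerce"], ("supplies", (None, "nuts"))))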

    An intermediate step with Metapattern
    As an intermediate step in the shift of conceptualization from a hierarchical to a heterarchical way of concept building, it may be helpful to use the strategy of the metapattern [...] (Wisse). Metapatterns are used as a new modeling strategy for complex informational systems. Metapatterns are not involved in changing the basic assumptions of programming languages or even their logic, as the PCL approach is.
    Metapatterns could be helpful to move the process of parallelization from the OR- and AND-level, that is, from the logical level, to the deeper level of the database, with its facts and rules, shared by the classical parallelism.
    She can relax on a fixed object orientation because - the metapattern determines that - situation and object are relative concepts [...]. A particular situation is also an object in another, higher-level situation. Likewise, an object can act as a situation in which another, lower-level object resides. Situation, then, is a recursive function of object and relationship.
    [...]
    Hierarchy or chiasm?
    It is this concept of situation that characteristically sets the metapattern apart from traditional object orientation (and provides it with advantages over OO; [...]). Compared to an object that (only) exists absolutely, an object believed to exist in a multitude a different situations can unambiguously be modeled - to be equiped - with corresponding behavioral multiplicity. [...]
    The radical conclusion from the orientation at situational behavior is that an object's identification is behaviorally meaningless. The modeler does not have to explicitly include something like an original signature in all her models. Essentially, a privileged situation may be implied. It serves the only purpose of guaranteeing sameness or, its equivalent, persistent identity across (other) situations. Being a situation in its own right, when included in a model it is represented by a separate context. Made explicit or not, its role is to authenticate an object's identity in other situations by establishing the signature in other contexts.
    Identity as a network of nodes
    Traditional object orientation assigns identity at the level of overall objects. Context orientation replaces this view of singular objects with that of plurality within the object; the object always needs a context to uniquely identify the relevant part of an overall object, which is what identifying nodes regulate. When behaviors are identical, no distinction between contexts is necessary.
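    [Editorial sketch, not part of the quoted text: a minimal rendering of "identity as a network of nodes" - one privileged identity that only guarantees sameness, plus per-context nodes that carry the actual behavior; the class and method names are illustrative assumptions, not Wisse's constructs.

        class ContextualObject:
            """An overall object whose behavior is meaningful only per context."""

            def __init__(self, identity):
                self.identity = identity   # privileged signature: sameness only
                self.contexts = {}         # context name -> behavior node

            def in_context(self, context, behavior):
                # Each context gets its own identifying node with its own behavior.
                self.contexts[context] = behavior
                return self

            def act(self, context):
                # Without a context there is no behavior to speak of.
                return self.contexts[context]()

        person = (ContextualObject("urn:person:42")
                  .in_context("employer", lambda: "works 9 to 5")
                  .in_context("family",   lambda: "cooks on Sundays"))

        print(person.act("employer"))  # same identity, situation-specific behavior
        print(person.act("family"))

    End of editorial sketch.]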

    Deconstruction of a typical PROLOG example
    [...]
    Instead of linearizing the above separated contextures [...] into one universal domain [...], the polycontextural modeling asks for an interweaving and mediating of these different contextures together into a complex poly-contexturality.
    [...]
    To decompose the basic classical ontology into different disjunct domains is a well-known procedure and should not be confused with the decomposition, or de-sedimentation, of an ontology in the PCL case. In PCL the domains are not simply disjunct and embraced by the general ontology but interwoven in a complex mechanism of interactions.

    Polylogical modeling of the metapattern
    [...]

    Prolog's ontology
    [...]
    In the framework of PCL, mechanisms are offered for a great flexibility in interlocking and interweaving different points of view, situations, and modelings.
    The decomposition of a universal domain into its different components not only introduces a conceptual advantage for the process of modeling, but also introduces a new form of parallelism on the computational level.
    The whole manoeuvre is quite similar to what I proposed as a proemial relation between sorts and universes in many-sorted first order logics.

    The devil is in the detail
    [...]
    [... W]e have two options, the mono- and the polycontextural. The advantage of the latter is flexibility, the advantage of the former is stability. Both have their weaknesses: flexibility is risky and dangerous, stability is restricting and killing.

    Ontological transitions
    From Types to behaviors
    Identity as a network of nodes
    Traditional object orientation assigns identity at the level of overall objects. Context orientation replaces this view of singular objects with that of plurality within the object; the object always needs a context to uniquely identify the relevant part of an overall object, which is what identifying nodes regulate. When behaviors are identical, no distinction between contexts is necessary.
    [...]
    The class hierarchy of the OO model is transformed into a heterarchical model of behaviors that are simultaneously ruling contexts.

    From behaviors to interactivity
    Behaviors, realized in situations and contexts, come in plurality.
    But metapattern doesn't offer much of a mechanism for navigation between simultaneous contexts. What we get is the notion of a pointer, "pointer information objects". They support navigation from one context to another. But these pointers give no hint of how they could be implemented.
    Metapattern points to the relevance of points of view
    [...]

    From objects to objectionality

    The hidden rules: logic and interferencing
    In contrast to the modelling aspect emphasized by the metapattern approach, from the point of view of implementation of the conceptual models we have to consider the underlying logics of the informational system, here ontologies for the Semantic web.
    With this turn we are enabled to show the overwhelming advantage of the PCL approach over the classical modelling and implementing standards. It is the polycontextural, that is, the polylogical apparatus which is framing the implementation of the deconstructed ontologies with the help of the metapattern. Without a polylogical implementation, the metapattern is an important modelling device but gives no guidelines for its real-world implementation. This can be realized by polylogically founded database logics.

    From Information to Knowledge
    [...]
    [...] What has to be mentioned is that in their different approaches they all introduced some two-level languages of object-level and meta-level theories.
    To give a further motivation to introduce a poly-contextural view of database systems it may be helpful to use the difference between the logic of data and the logic of knowledge.
    [...]
    Polycontexturality, like the metapattern approach, takes a different strategy. Objects are objects only in relationship to contexts. More adequately, objects are understood by their behavior. Therefore, an abstract object without any behavior, independent of contexts, doesn't exist; it is a nil object.
    Therefore, classical objects, like data, have a one-level behavior, they exist by being named. They are the result of the process of naming.
    Semiotically we are making a shift from the dualistic to a trichotomic semiotics, and further to a chiastic graphematics.
    What are the objects of the Semantic Web?
    While formalizing the principles governing physical objects or events is (quite) straightforward, intuition comes to odds when an ontology needs to be extended with non-physical objects, such as social institutions, organizations, plans, regulations, narratives, mental contents, schedules, parameters, diagnoses, etc. In fact, important fields of investigation have negated an ontological primitiveness to non-physical objects [...], because they are taken to have meaning only in combination with some other entity, i.e. their intended meaning results from a statement. For example, a norm, a plan, or a social role are to be represented as a (set of) statement(s), not as concepts. This position is documented by the almost exclusive attention dedicated by many important theoretical frameworks (BDI agent model, theory of trust, situation calculus, formal context analysis) to states of affairs, facts, beliefs, viewpoints, contexts, whose logical representation is set at the level of theories or models, not at the level of concepts or relations[."]
    Sowa

    Interactions in a meaningful world
    Queries, question-answering systems
    [...]

    [...]

    On Deconstructing the Hype
    [...]

    The hype of the distributed, decentralized and open Web
    [...]
    [...] There are no surprises at all if we discover that the structure of the Web is strictly centralized, hierarchic, non-distributed and totally based on the principle of identity of all its basic concepts. The functioning of the Web is defined by its strict dependence on a "centralized authority".
    If we ask about the conditions of the functioning of the Web we are quickly aimed at its reality in the well known arsenal of identity, trees, centrality and hierarchy.
    Why? Because the definition of the Web is entirely based on its identification numbers. Without our URIs, DNSs etc. nothing at all is working. And what else are our URIs than centralized, identified, hierarchically organized numbers administrated by a central authority?
    Again, all this is governed by the principle of identity.
    "We should stress that the resources in RDF must be identified by resource IDs, which are URIs with optional anchor ID." [...]
    What is emerging behind the big hype is a new and still hidden demand for a more radical centralized control of the Web than its control by URIs. The control of the use, that is of the content of the Web. Not on its ideological level, this is anyway done by the governments, but structurally as a control over the possibilities of the use of all these different taxonomies, ontologies and logics. And all that in the name of diversity and decentralization.
    All the fuss about the freedom of the (Semantic) Web boils down to at least two strictly centralized organizational and definitorial conditions: URI and GOL.
    [...]
    In order to achieve its potential, the Semantic Web must provide a common interchange language bridging these diverse systems.
    [...]
    Nevertheless, it is important not to confuse the fundamental difference of deep-structure and surface-structure of the Semantic Web. This fundamental difference of deep/surface-structure is used in polycontextural logic not as a metaphysical but as an operational distinction. And all the Semantic Web "cakes" are confirming it.
    [...]
    Beyond the layer of Unicode and URI we have to add their arithmetical and code theoretical layers. The Semantic Web Cake is accepting the role of logic, down to its propositional logic, but is not mentioning arithmetics. As we have seen in Derrida's Machines, arithmetics and its natural numbers are pre-given and natural. There is not much to add. There are many possible open questions with Unicode and URI, but not with its common arithmetics.
    [...]

    Conflicts between diversity and centralization of ontologies
    Our media philosophers are still fantasizing about the virtuality of the Web and the new Global Brain and bodiless decentralized sex, but there is no worry, the authority of the URI is controlling the game from the very beginning. And now we are going a step further, still unremarked by the critical media studies, and have to deal with a much more sophisticated attempt at the centralization and control of the Web by the GOL. Without a General Ontology Language there is no Semantic Web at all. GOL may be made explicit or may remain in the background, as a new cyber-unconsciousness like the URIs, but together with the Unicode and the URIs it is ruling the whole game.
    [...]
    Our global village is dealing with the same, and simple, problems of the old Greek marketplace of discussions, all waiting for a great generalist, Aristotle, to put an end to the semantic chaos by introducing his GOL and Logic.
    [...]
    It will turn out that the general theory is not so much an ontology GOL but a theory of translating and mediating different ontologies, first-order as well as second-order ontologies. A Dynamic Semantic Web would add to the translations some mechanisms of transformation and metamorphosis.
    Its main candidate is well known too: category theory, the ultimate theory of translation.

    Trees, Hierarchies and Homogeneity
    The general language of the Semantic Web is XML. But what is XML? In short: a tree. The same is true for the other languages like RDF.
    As developed in Derrida's Machines, the main structure of formal thinking is natural. Everything has an origin and is embedded in a tree. Natural deduction systems, natural number systems, and also the limits of this paradigm of thinking are natural. And this is also the way the Semantic Web is organized. XML is a tree. The tree is natural and universal.
    Again.

    Structuration: Dynamics and Structures
    [...]
    It is all about dynamics and structures. This brings us back to the central topics of Derrida's Machines: Interactivity between structures and dynamics, that is, to the interplay of algebras and co-algebras, ruled by category theory and surpassed by the diamond strategies leading to polycontexturality and kenogrammatics.
    We arrive back to terms like translation, metamorphosis, polycontexturality, kenogrammatics, algebra and co-algebra, swinging types of algebras and co-algebras, etc.
    A new effort has to be undertaken to collect the concepts, problems and methods of the Semantic Web into a more general and formal framework.
    Not surprisingly, the main topic of the Semantic Web is translation, in other words an "interchange language". Translation of taxonomies, ontologies and logics. Translation as interaction, merging and transforming different domains, points of view, contexts. The most general approach to translation is given by the methods of category theory and semiotic morphisms [...] not yet applied by the Semantic Web community. [...]
    It seems to be obvious that the languages of translation, mediation and metamorphosis are not languages of a general ontology as containing the "most general, domain-independent categories of reality" but languages which are neutral to ontologies, describing what happens between ontologies. Their purpose is not intra-ontological but inter-ontological, mediating ontologies and not functioning themselves as ontologies.
    Dynamics is not only covered by conservative interchange but interwoven in permanent transformations ruled by the play of metamorphosis. Metamorphosis can be understood as an unrestricted interplay of categories disseminated in a polycontextural framework. Metamorphosis is not only preserving but subverting meanings in the process of interactivity. Translation is interchange, metamorphosis is creation of new meanings.
    The behavior of the Semantic Web is best modelled in terms of an interplay of algebras and co-algebras in the general framework of category theory. But, as I have shown, this is only a very first step in modeling the interactivity of autonomous systems. This means that I reject the idea of modeling the structural dynamics/dynamical structure by category-theoretical morphisms only.
    Interactivity comes with reflectionality, architectonics and positionality. These topics have to enter the game to design a more dynamic Semantic Web than is considered by the very simple and conservative procedures of merging and integrating ontologies and creating contextual concept spaces.
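    [Editorial sketch, not part of the quoted text: the algebra/co-algebra interplay invoked here, under the usual reading that a co-algebra unfolds (produces) a structure step by step while an algebra folds (consumes) it; the helper names are illustrative.

        def unfold(seed, step):
            """Co-algebra: observe a state, emit a value and a successor state."""
            while True:
                out = step(seed)
                if out is None:
                    return
                value, seed = out
                yield value

        def fold(values, combine, unit):
            """Algebra: collapse the produced values into a single result."""
            acc = unit
            for v in values:
                acc = combine(acc, v)
            return acc

        # Produce the first eight Fibonacci numbers co-algebraically,
        # then fold them back into a list (dynamics meets structure).
        fib = unfold((0, 1, 8),
                     lambda s: None if s[2] == 0 else (s[0], (s[1], s[0] + s[1], s[2] - 1)))
        print(fold(fib, lambda acc, v: acc + [v], []))  # [0, 1, 1, 2, 3, 5, 8, 13]

    End of editorial sketch.]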

    Problems with semantics?
    [...]

    Problems with inferencing?

    [..]"

    Comment

    [see also for example the documents titled "Ontology-based Web Agents" (software agents or softbots) published in 1997 and "Towards the Semantic Web: Knowledge Representation in a Dynamic, Distributed Environment" published in 2001, and also the quoted and commented works "..." and "..." below]

    See also Sloman's Meta-Morphogenesis in relation to his CogAff architecture and our Evoos.

    In addition to the problems with reflection, exactly the problem of all those

  • ontologies utilized as knowledge base schemata, and
  • database schemata based on for example the Entity Relationship Model (ERM) and the Structured Entity Relationship Model (SERM), and also
  • Distributed Systems (DSs) with their nodes or peers,
  • Distributed Operating Systems (DOSs), and
  • Distributed File Systems (DFS),
  • Distributed DataBase Management Systems (DDBMS),

    as well as/which is their

  • inherent subjectivity

    led us to look at non-classical logics beyond fuzzy logic, and cybernetical logics, as proven with the now famous sentences of chapter 8 Lösungsansatz==Solution Approach of The Proposal: "Bei dem Versuch der Aufstellung eines logischen Modells erweist sich die Forderung nach der Reflektivität als problematisch. Deshalb weichen mehrere Forschungsprojekte auf unterschiedliche logische Systeme aus, die zum Teil die klassischen logischen Ansichten erweitern (Stichwort Nicht-klassische Logik) oder aber nicht mehr beachten (Stichwort Kybernetische Logik).==In the attempt to establish a logical model, the requirement of reflectivity proves to be problematic. Therefore, several research projects deviate to different logical systems, which partly extend the classical logical views (keyword non-classical logic) or do not consider them anymore (keyword cybernetic logic)."
    For sure, the latter includes the holologic and the PolyContextural Logic (PCL).

    Instead of a tabular organization, we took the Binary-Relational Model (BRM) and the graph as organization, because of the SN, CG, and RDF graph, and also NN, ANN, triple store, tuple space, triple space, etc..
    We applied graph on graph (see for example the quote and the comment in relation to the Arrow System), not tree on tree, and not tree on tableaux or matrix.
    We also took the hypergraph due to the mathematical set (a hyperedge is simply a set of nodes).
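    A minimal sketch of this organization, assuming a naive in-memory triple store whose named contexts serve as identifying subgraphs and whose hyperedges are plain mathematical sets of nodes; all names and data are illustrative.

        # Triple store with a context component: (subject, predicate, object, context).
        triples = {
            ("evoos",    "isA",  "operating_system", "architecture"),
            ("evoos",    "uses", "ontology",         "knowledge"),
            ("ontology", "isA",  "graph",            "knowledge"),
        }

        def subgraph(context):
            """A subontology: the set of triples identified by one context."""
            return {(s, p, o) for (s, p, o, c) in triples if c == context}

        # A hyperedge in the hypergraph view is simply a set of nodes.
        hyperedge = ({s for (s, _, _, _) in triples}
                     | {o for (_, _, o, _) in triples})

        print(subgraph("knowledge"))
        print(hyperedge)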

    So many talks, so few solutions, and no decisions: when we read PCL, semiotics, kenogrammatics, and so on, we directly saw the same as with the pros and contras regarding Descartes. They are running in circles and are only using more words than required, explaining something with words that could be covered by logics, and adding more words increases the complexity, because the problems are talked to pieces in a highly intellectual way. Eventually, they do not get to a final conclusion and action.

    Another point at this time was the question of system understandability. If there is no formalism, then where does the meaning and language for humans come from?
    Another point at this time was the question of system trustworthiness. If there is no understandability, then where does the trust for humans come from?

    Strictly speaking, Evoos and OS are interaction systems and have all the properties discussed in this and related papers.

    The example of the mono-contextural introduction of "isa" and the poly-contextural "isa as (such)" shows the contradiction between the

  • demand to participate or get involved, and the subjectivity respectively polycontexturality in Gotthard Günther's mathematics in relation to feeling, expression, language, opinion, common sense, etc., on the one side, and
  • pure rationality and exact ontology on the other side.

    We saw directly that there is no solution and therefore no possibility for the Semantic Web to ever become a reality as envisioned and designed.
    "Subjectivity is Truth", Søren Kierkegaard, but eventually required are a

  • simple approach, mechanism, functionality, etc., which allows one to get the advantages and just works for an intelligent, cognitive agent system,
  • purely rational core, on which all other knowledge representations are based, and
  • central metadata repository

    as the foundation. Obviously, only our SOPR in the Ontoverse (Ov) is able to provide this.

    Other problems comprise

  • badly constructed ontologies,

    which cannot be solved with PCL or whatsoever.

    from polygonal and poly-contextural to ontogonal and onto-contextural

    Prolog parallelism is mapped into distributed logics, that is, into a polylogical system. As in the case of the Agent-Based System (ABS), we also noted a lack of knowledge about operating systems in the field of cybernetics. The Distributed Holonic Agent System (DHAS) with its basic properties for or capabilities of communication, negotiation, and interaction, as well as reflection, already does the job of subjectivity respectively polycontexturality. We even have the HAS with ontology as identifying structure and holologic. We have subgraphs or hypersets, semantic subnetworks or subgraphs, or subontologies as identifying structures, and agent-orientation in addition to object-orientation with the agents in their individual situations.
    The first simple implication is that polycontexturality is simply inherent to the Evoos Architecture, or said in other words, it is a polylogical system and even an ontological system as a side effect of its design.
    The second simple implication is that the so-called Dynamic Semantic Web is simply inherent to the Evoos Architecture. And this is what the plagiarists and more criminal entities understood only in 2004, and at the same time we understood how powerful our Evoos already is.

    To some extent, Poplog and Sim Agent already provide the functionality of ARS for example, but at this point one can see that an overall concise system architecture and formalization is missing. Once again we wanted a minimalistic solution, and as a matter of well-thought-through design our minimalistic graph-based solution already solves this requirement as well.
    Everything simply came together, everything.
    We also concluded that a three-valued logic is sufficient, as we also explained with fuzzy logic, with or as part of a graph-based system (see for example the book "Visuelle Programmierung==Visual Programming" published in 1998, specifically the chapters Regelorientierte VP-Systeme==Rule-oriented VP Systems (e.g. Progres) and Multiparadigmenorientierte VP-Systeme==Multiparadigm-oriented VP Systems).
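    A minimal sketch of such a three-valued logic, assuming the strong Kleene reading with Python's None standing for "unknown"; the function names are illustrative.

        def k_not(a):
            """NOT: unknown stays unknown."""
            return None if a is None else (not a)

        def k_and(a, b):
            if a is False or b is False:
                return False   # False absorbs, even next to unknown
            if a is None or b is None:
                return None
            return True

        def k_or(a, b):
            if a is True or b is True:
                return True    # True absorbs, even next to unknown
            if a is None or b is None:
                return None
            return False

        for a in (True, None, False):
            for b in (True, None, False):
                print(a, "AND", b, "=", k_and(a, b), "|",
                      a, "OR", b, "=", k_or(a, b))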

    We came from comics and Visual Programming (VP), and programming with graphs was one of many possibilities (see for example the book "Visuelle Programmierung==Visual Programming" published in 1998, specifically the chapters Regelorientierte VP-Systeme==Rule-oriented VP Systems (e.g. Progres) and Multiparadigmenorientierte VP-Systeme==Multiparadigm-oriented VP Systems), like for example Progres.

    And then we find the companies SAP and Ontoprise, the latter of which was established to block and damage C.S. and our corporation, as proven by several facts, which we will not disclose here, but which was already dissolved and sold in 2012.
    Obviously, Ontoprise wanted to steal our Evoos through PCL, which it found by espionage, because the quoted paper titled "Intro PCL" has been saved at least since the 21st of January 2000.

    Comment 2
    Obviously, DSW has several basic properties of the Arrow System and our Evoos described in The Proposal and therefore is based on them besides PolyContextural Logics (PCL). {more quotes will provide more evidences for our claim}
    Furthermore, it shows that these foundational systems and the DSW provide the foundational concept of Linked Data (LD), and also the difference between the static Semantic WWW and the SWWW with Linked Data, and the equivalence with the foundations of our Evoos.

    dynamics of ontology

    But the result of the DSW, Common Sense Computing (CSC), and Linked Data (LD), due to multiple opinions or views, and interpretations vs. exact ontology, shows that computational ontology is confused with ontology proper. But a core is required, a web of facts. LD is doubtlessly not such a web of facts. It is the same chaos as with ontology, crypto, and so on, and we only clean up our destroyed, messed-up properties, and provide said ontological core under the full control of our SOPR.

    See also once again the Investigations::Multimedia, AI and Knowledge management Linked Open Data (LOD) Special and Investigations::AI and Knowledge management of the 31st of December 2013.
    The webpage about KG of the company Sirma Group→Ontotext has been quoted above or below to further document its illegal activities.
    [to be continued]

    We quote a document, which is about multi-paradigmatic programming and was published in June 2005: "Derrida's Machines Part III
    ConTeXtures [] Programming Dynamic Complexity
    An Application: Fibonacci in ConTeXtures
    [...]

    Fibonacci in ConTeXtures
    General Framework of the ConTeXtures [translated from the German original]
    How are the ConTeXtures to be classified?
    1. Question: Individual systems
    [...]
    2. Question: Communication
    [...]
    3. Question: Interaction and reflection
    What would we gain if these autonomous computing systems could not only communicate with each other, but also interact with each other and reflect on their interactions?
    Answer: We would obtain a complexion that could no longer be reduced to one single and individual system, because a single and unitary system cannot interact with itself. Interaction, and also reflection, presuppose a diversity of systems. This would enable true cooperation between systems, which exhibits a higher complexity than any individual system, however large. Co-creation of shared environments would become possible. This does not hold for communicative systems, since their complexity can (in principle) be reduced to an exchange of information between subsystems of a single overall system without an environment.
    4. Proposal: ConTeXtures
    ConTeXtures represent the first programming paradigm that meets the requirements of an interactional and reflectional complexion of computers and programs. Interactivity and reflectionality of computer systems become conceptually graspable through ConTeXtures and are programmed by them.
    How is that possible?
    Because ConTeXtures are conceived on the basis of polycontextural logic. ConTeXtures, as a programming system, consist of a distribution and mediation of the proto-typical programming language ARS (Loczewski), which in turn builds on the lambda calculus. The lambda calculus is acknowledged as the basic model of any programming whatsoever and is equivalent to the more machine-oriented model of the Turing machine.
    ARS (Abstraction, Reference, Synthesis) was introduced by Loczewski as a proto-typical system, but has also been developed further by him for various programming languages, such as C, C++, [LISP] Scheme, and Python, for connectivity to real-world programming.
    ConTeXtures understand themselves in the tendency of what is today called the "interactional revolution in computer science" (Milner, Wegner).

    What is new with respect to Derrida's Machines?
    ConTeXtures are the first paradigm and system of programming on whose basis the conceptual designs of Derrida's Machines can be realized. They also offer the program-technical connection to existing mono-contextural programming languages, mediated by the interfaces of ARS, and thereby a method for their distribution and mediation.
    ConTeXtures make it possible to realize the disseminative cut in programming terms. The disseminative cut is the decisive strategy for making existing programming methods, concepts, and solutions accessible to a heterarchization and thus to an implementation through ConTeXtures.
    ConTeXtures show their strength, however, in a field that is predominantly characterized by ambiguous, polysemous, and conflictual constellations and that is explicitly excluded by the classical paradigms of programming. Programs must by definition be disambiguated. Lifeworld circumstances, language and meaning, interaction and reflection, etc. are as such simply not disambiguatable. Marvin Minsky's advance towards a problem-solving strategy of "multiple ways of thinking", p-Analogy, as the prototype of a new way of thinking in the design of intelligent systems, is brought closer to a realization by the use of the ConTeXtures.
    Conceptually, this state of affairs was elaborated in full detail in Derrida's Machines; the ConTeXtures design for the first time a matching paradigm of programming.

    A Metaphor: Intelligent Printer System
    Example of an autonomous reflectional and interactional real-time system: a MIRC printer system. (MIRC: mobile, interactional, reflectional, computation). See also: Autonomic Computing, IBM.
    A printer in a complex network of interconnected computers has an abundance of jobs to complete. [...]
    A MIRC printer is a system capable of learning, which reflects the behavioral patterns of the network, that is, knows its history, and which, through interaction with the senders, is able to carry out negotiations, with the aim of guaranteeing an optimization of the procedures and jobs appropriate to the network. And this during the execution of the jobs and not before or afterwards by adjusting the priority list. [...]
    Such a printer system is now a fully integrated yet autonomous system of the overall network and not merely a peripheral.
    How is it possible to realize such an intelligent printer system?
    Reflectionality and interactivity. Owing to the separability of the undecidability of self-reflectional systems into distinct distributed systems, as realized in ConTeXtures, reflectionality and interactivity are possible simultaneously with the process flow, in real time.
    What are the requirements for an interaction system?
    Algorithmic systems are structurally closed and have only secondary access to their environment (Turing machine). Their main goal is the optimization of the computation of the algorithms under the precondition of their guaranteed computability [(see chapter 8.2.3 Principle of a Virus and the reference to TUNES therein)]. Interaction systems, by contrast, must be optimized with respect to their reaction speed. The more directly a system can react to its environment, the more efficient it is. Reactivity, however, is not identical with computing speed. Real-time computing depends primarily on the directness of the reactivity and not on the speed at which an algorithm is computed (to its end), whose initial situation (data, assumptions) has long become obsolete in the intervening time, however short it may be.

    Basic Features of the ConTeXtures
    As a system of the dissemination (distribution and mediation) of the lambda-calculus-based programming paradigm ARS (Abstraction, Reference, Synthesis) [from our Evoos] with its concept of computability, the ConTeXtures are characterized by two fundamentally new properties: reflectionality and interactionality [(see chapter 8.2.3 Principle of a Virus and the reference to TUNES therein)]. These can be regarded as two dimensions of the dissemination of algorithms and ground the polycontextural matrix. The ConTeXtures thereby belong to the third type of programming.
    In accordance with the mediatedness of the subsystems of the ConTeXtures, the construction structure is characterized by levels and features that do not exist in classical systems. This results in the following architectonics.

    General Architectonics
    Architectonics with templates and patterns. These specify the structure of the system with respect to reflectionality and interactivity. Configurations and constellations regulate the various combinations of the topics (data structures) and styles (programming styles) of ARS programming systems. [...]
    [..]
    The example of the computation of the Fibonacci numbers discussed later refers to the topos Numbers and the style Functional Programming. The same can be demonstrated for object-oriented programming, which is of importance especially for a Dynamic Semantic Web (heterarchization of ontologies and databases).
    [..]

    [...]

    Proto-typical Applications in ConTeXtures
    What was sketched conceptually in great generality in DERRIDA'S MACHINES and especially in Dynamic Semantic Web can now be realized step by step in the language framework of the ConTeXtures. [...]

    [...]

    Basic Concepts of Parallelity in pLISP
    An interesting solution is offered by pLISP [...]. It is to be located between the classical approaches to parallelism and the one proposed here. The difference between pLISP and the ConTeXtures lies in the fact that pLISP introduces a parallelism on the basis of distributed strict operators with the help of the newly introduced combinator P (proemial combinator), whereas the ConTeXtures attempt a dissemination also of the non-strict operators (I, K, S).
    [..]

    [...]

    Balance
    Corresponding to the distinction between data-related, algorithmic (non-parallel and parallel), and architectonic complexity, different optimization strategies and their costs can be distinguished for each type of realization.
    Disadvantages of polycontextural modeling. Compared with all other models, architectonic parallel processing has the sole disadvantage that, for its realization, it conceptually presupposes from the outset a dissemination of both the program modules and the processors, which exceeds the known approaches. On closer examination, however, both aspects turn out to be advantages.
    Programming a distribution of modules is easier than constructing and maintaining a large and unitary program, which may also be built modularly, but whose modules can only be distributed hierarchically. Modules in ConTeXtures can be organized both hierarchically and heterarchically, that is, both vertically and horizontally. This guarantees a far greater flexibility of programming than is possible in other known modular programming systems, such as OOP. [...]
    The advantages are, above all, the completely new features of interactionality and reflectionality in addition to the distributed form of the parallelization of computations. This results not only in increased performance, but also in an increase in the security of the systems, based on the increased directness of access in the interaction with the environment for real-time systems.
    [...]"

    Comment
    The author was totally amazed by our Evoos and worked out in detail many foundational aspects and concepts created with our Evoos, specifically in relation to the fields of cybernetics and logics.
    But virtually all of the following keywords

  • reflection
  • distributed
  • complexity
  • Algorithmic Information Theory (AIT)
  • Fibonacci
  • contexture
  • Scheme, which is a dialect of Lisp virtually not taught at many places on planet Earth, like the university visited by C.S.
  • Marvin Minsky
  • Autonomic Computing (AC)
  • Berechenbarkeit
  • architecture with templates and patterns
  • ontology
  • prototype

    can be found in The Proposal and are even crystal clear evidences (Bingo!!!), which doubtlessly prove that our Evoos was taken as source of inspiration and blueprint, and show an infringement of the rights and properties (e.g. copyright) of C.S..

    For sure, the so-called ConTeXtures are not the first programming paradigm that meets the requirements of an interactional and reflectional complexion.
    It also shows that Evoos is based on or even introduced polycontextural logics.
    By the way: Our first document titled "polycontextural logics" was saved on the 21st of January 2000, just 5 weeks after the publication of the first version of The Proposal describing our Evoos.

    ARS-based programming is interesting in general, but should not be over-interpreted as relevant.
    We also have a multiparadigmatic programming approach, but we also wanted an even more poly- or multi-approach, including a {multilogic} approach and also an approach that is not based on an Abstract Syntax Tree (AST) and term rewriting, but on a graph and graph rewriting. For example, PROGRES.
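    A minimal sketch of the contrast between the two styles, assuming toy rewrite rules; PROGRES itself is not used here, and both rule encodings are illustrative.

        # Term rewriting over an AST: the rule ("add", x, 0) -> x.
        def rewrite_term(term):
            if isinstance(term, tuple):
                term = tuple(rewrite_term(t) for t in term)
                if term[0] == "add" and term[2] == 0:
                    return term[1]
            return term

        print(rewrite_term(("add", ("add", "x", 0), 0)))  # 'x'

        # Graph rewriting over an edge set: one rewrite step adds a
        # shortcut edge a->c wherever the pattern a->b->c matches.
        edges = {("a", "b"), ("b", "c"), ("c", "d")}

        def rewrite_graph(edges):
            new = set(edges)
            for (x, y) in edges:
                for (y2, z) in edges:
                    if y == y2:
                        new.add((x, z))
            return new

        print(sorted(rewrite_graph(edges)))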

    We quote a webpage about Linked Data (LD), which was published on the 27th of July 2006: "Linked Data
    The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data.
    Like the web of hypertext, the web of data is constructed with documents on the web. However, unlike the web of hypertext, where links are relationship anchors in hypertext documents written in [HyperText Markup Language (]HTML[)], for data they [are] links between arbitrary things described by [the Resource Description Framework (]RDF[)]. The [Uniform Resource Identifiers (]URIs[)] identify any kind of object or concept. But for HTML or RDF, the same expectations [of behavior] apply to make the web grow:
    1. Use URIs as names for things
    2. Use HTTP URIs so that people can look up those names.
    3. When someone looks up a URI, provide useful information, using the standards (RDF*, [SPARQL Protocol And RDF Query Language (]SPARQL[)])
    4. Include links to other URIs[,] so that they can discover more things.
    Simple. In fact, though, a surprising amount of data isn't linked in 2006, because of problems with one or more of the steps. This article discusses solutions to these problems, details of implementation, and factors affecting choices about how you publish your data.

    The four rules
    I'll refer to the steps above as rules, but they are expectations of behavior. [...]
    The first rule, to identify things with URIs, is pretty much understood by most people doing semantic web technology. If it doesn't use the universal URI set of symbols, we don't call it Semantic Web.
    The second rule, to use HTTP URIs, is also widely understood. The only deviation has been, since the web started, a constant tendency for people to invent new URI schemes (and sub-schemes within the urn: scheme) such as LSIDs and handles and XRIs and DOIs and so on, for various reasons. Typically, these involve not wanting to commit to the established Domain Name System (DNS) for delegation of authority but to construct something under separate control. Sometimes it has to do with not understanding that HTTP URIs are names (not addresses) and that HTTP name lookup is a complex, powerful and evolving set of standards. [...]
    The third rule, that one should serve information on the web against a URI, is, in 2006, well followed for most ontologies, but, for some reason, not for some major datasets. One can, in general, look up the properties and classes one finds in data, and get information from the RDF, RDFS, and OWL ontologies including the relationships between the terms in the ontology.
    The basic format here for RDF/XML [...]. Large datasets provide a SPARQL query service, but the basic linked data should be provided as well.
    Many research and evaluation projects in the few years of the Semantic Web technologies produced ontologies, and significant data stores, but the data, if available at all, is buried in a [compressed data] archive somewhere, rather than being accessible on the web as linked data. [...]
    There is also a large and increasing amount of URIs of non-ontology data which can be looked up. Semantic wikis are one example. The "Friend of a [F]riend" (FOAF) and Description of a Project (DOAP) ontologies are used to build social networks across the web. Typical social network portals do not provide links to other sites, nor expose their data in a standard form.
    [...] are two portal web sites which do in fact publish their data in RDF on the web. [...] This means that I can write in my FOAF file that I know [a person 1] by using his URI in the [web site 2] data, and a person or machine browsing that data can then follow that link and find all his friends. [Update:] Also, the [web site 2] allows you to register the RDF URI for yourself on another site. This means that public data about you from different sites can be linked together into one web, and a person or machine starting with your [web site 2] identity can find the others.
    The fourth rule, to make links elsewhere, is necessary to connect the data we have into a web, a serious, unbounded web in which one can find all kinds of things, just as on the hypertext web we have managed to build.
    In hypertext web sites it is considered generally rather bad etiquette not to link to related external material. The value of your own information is very much a function of what it links to, as well as the inherent value of the information within the web page. So it is also in the Semantic Web.
    So let's look at the ways of linking data, starting with the simplest way of making a link.

    [...]

    Browsable graphs
    So now we have looked at ways of making a link, let's look at the choices of when to make a link.
    One important pattern is a set of data which you can explore as you go link by link by fetching data. Whenever one looks up the URI for a node in the RDF graph, the server returns information about the arcs out of that node, and the arcs in. In other words, it returns any RDF statements in which the term appears as either subject or object.
    Formally, call a graph G browsable if, for the URI of any node in G, if I look up that URI I will be returned information which describes the node, where describing a node means:
    1. Returning all statements where the node is a subject or object; and
    2. Describing all blank nodes attached to the node by one arc.
    (The subgraph returned has been referred to as "minimum Spanning Graph (MSG [@@ref] ) or RDF molecule [@@ref], depending on whether nodes are considered identified if they can be expressed as a path of function, or reverse inverse functional properties. A concise bounded description, which only follows links from subject to object, does not work.)
    In practice, when data is stored in two documents, this means that any RDF statements which relate things in the two files must be repeated in each. So, for example, in my FOAF page I mention that I am a member of the DIG group, and that information is repeated on the DIG group data. Thus, someone starting from the concept of the group can also find out that I am a member. In fact, someone who starts off with my URI can find all the people who are in the same group.
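    [Editorial sketch, not part of the quoted text: the describing rule above applied to a toy triple set; "_:" marks blank nodes, and all names are illustrative.

        graph = {
            ("#timbl", "member", "#dig"),
            ("#dig",   "type",   "Group"),
            ("#timbl", "knows",  "_:b1"),    # "_:" prefix marks a blank node
            ("_:b1",   "name",   "Dan"),
        }

        def describe(node):
            """Return all statements where the node is subject or object,
            plus the description of blank nodes attached by one arc."""
            stmts = {t for t in graph if t[0] == node or t[2] == node}
            for (s, p, o) in set(stmts):
                for blank in (x for x in (s, o) if x.startswith("_:")):
                    stmts |= {t for t in graph if t[0] == blank or t[2] == blank}
            return stmts

        for t in sorted(describe("#timbl")):
            print(t)   # includes the membership arc that both documents repeat

    End of editorial sketch.]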

    Limitations on browseable data
    So statements which relate things in the two documents must be repeated in each. This clearly is against the first rule of data storage: don't store the same data in two different places; you will have problems keeping it consistent. This is indeed an issue with browsable data. A set of completely browsable data with links in both directions has to be completely consistent, and that takes coordination, especially if different authors or different programs are involved.
    We can have completely browsable data, however, where it is automatically generated. The dbview server, for example, provides browsable virtual documents containing the data from any arbitrary relational database.
    When we have data from multiple sources, then we have compromises. These are often settled by common sense, asking the question,
    "If someone has the URI of that thing, what relationships to what other objects is it useful to know about?"
    Sometimes, social questions determine the answer. I have links in my FOAF file that I know various people. They don't generally repeat that information in their FOAF files. Someone may say that they know me, which is an assertion which, in the FOAF convention, is theirs to assert, and the reader's to trust or not.
    Other times, the number of arcs makes it impractical. A GPS track gives thousands of times at which my latitude, longitude are known. Every person loading my FOAF file can expect to get my business card information, but not all those trackpoints. It is reasonable to have a pointer from the track (or even each point) to the person whose position is represented, but not the other way.
    One pattern is to have links of a certain property in a separate document. A person's homepage doesn't list all their publications, but instead puts a link to a separate document listing them. There is an understanding that foaf:made gives a work of some sort, but foaf:pubs points to a document giving a list of works. Thus, someone searching for a foaf:made link would do well to follow a foaf:pubs link. It might be useful to formalize the notion with a statement like
    foaf:made link:listDocumentProperty foaf:pubs.
    in one of the ontologies.

    Query services
    Sometimes the sheer volume of data makes serving it as lots of files possible, but cumbersome for efficient remote queries over the dataset. In this case, it seems reasonable to provide a SPARQL query service. To make the data be effectively linked, someone who only has the URI of something must be able to find their way to the SPARQL endpoint.
    Here again the HTTP 303 response can be used, to refer the enquirer to a document with metadata about which query service endpoints can provide what information about which classes of URIs.
    Vocabularies for doing this have not yet been standardized.
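    [Editorial sketch, not part of the quoted text: a SPARQL query over a small RDF graph, assuming the third-party Python library rdflib; the data and the query are illustrative, not the endpoint services discussed above.

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
            @prefix foaf: <http://xmlns.com/foaf/0.1/> .
            <http://example.org/me>  foaf:name "Alice" ;
                                     foaf:knows <http://example.org/bob> .
            <http://example.org/bob> foaf:name "Bob" .
        """, format="turtle")

        # Ask for the names of everyone <me> knows.
        results = g.query("""
            PREFIX foaf: <http://xmlns.com/foaf/0.1/>
            SELECT ?name
            WHERE { <http://example.org/me> foaf:knows ?f . ?f foaf:name ?name . }
        """)
        for row in results:
            print(row[0])   # Bob

    End of editorial sketch.]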

    [...]

    Conclusion
    Linked data is essential to actually connect the semantic web. It is quite easy to do with a little thought, and becomes second nature. Various common sense considerations determine when to make a link and when not to.
    The Tabulator client (running in a suitable browser) allows you to browse linked data using the above conventions, and can be used to check that your linked data works.
    [...]

    Followup
    2006-02 Rob Crowell adapts Dan Connolly's DBView (2004) which maps SQL data into linked RDF, adding backlinks.
    2006-09-05 Chris Bizer et al adapt D2R Server to provide a linked data view of a database.
    2006-10-10 Chris Bizer et al produce the Semantic Web Client Library, "Technically, the library handles the Semantic Web like a single Jena RDF graph or Jena Model." The code fetches web documents as needed to answer queries.
    2007-01-15 Yves Raimond has produced a Semantic Web client for SWI-Prolog with similar functionality.
    [...]"

    Comment
    First of all, we note that ontologies are not the same as data stores and what is called Linked Data (LD).
    We also note that data and links between data are separated from query and inference mechanisms (e.g. services) in relation to LD.

    At this point, one can already see that the demand for Linked Data was made as the minimal step to get a foundational part of our Evoos: dynamic ontology, ontological knowledge base, Dynamic Semantic Web (DSW).
    But this approach to lay a web of links, a web of linked data, or simply a web of data, a web of meaning, and a web of RDF documents over the web of hypertext documents (see specifically the fourth rule once again) has multiple deficits and shortcomings from the conceptual, social, and technical points of view.
    Eventually, that is bad software design and architecture and only shifts the chaos from the syntactic level respectively the web of hypertext documents to the semantic layer respectively the web of RDF documents.
    Obviously, there is no clue about Logische Systeme der Informatik==Logical Systems of Informatics or Information science (LSI) (see H. Thiele) and hence there is no formal semantics and mathematical structure, because with such an order the deficits would have been seen directly. Luckily, C.S. was already there in 1999, when the same author proposed the Semantic (World Wide) Web (SWWW). Eventually, one of the best approaches is to begin with an ontological empty set or Ontologic Zero or Ontologic Null, or Zero Ontology or Null Ontology.

    See once again the Investigations::Multimedia, AI and Knowledge management Linked Open Data (LOD) Special and Investigations::AI and Knowledge management of the 31st of December 2013.

    Howsoever, we have Evoos at the bottom and the WWW with a web browser and the SWWW at the top of the overall system stack.
    In addition, we have the Internet at the bottom, and middleware and frameworks for programming languages (e.g. Java), Agent-Oriented Programming (AOP), and Agent-Based System (ABS) in the middle.
    We also have links between

  • Logic Programming (e.g. Prolog), database, and Structured Query Language (SQL), and
  • linked data and RDF

    for querying and browsing the SWWW.
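    A minimal sketch of this correspondence, assuming an in-memory stand-in for the web of data; the same illustrative triples answer a Prolog- or SQL-style query and drive link-by-link browsing.

        triples = [
            ("http://example.org/me",  "knows", "http://example.org/bob"),
            ("http://example.org/bob", "knows", "http://example.org/carol"),
        ]

        # Query view (Logic Programming or SQL style): all ?x with (me knows ?x).
        print([o for (s, p, o) in triples
               if s.endswith("/me") and p == "knows"])

        # Browsing view (linked data style): follow "knows" arcs link by link.
        node, path = "http://example.org/me", []
        while True:
            nxt = next((o for (s, p, o) in triples
                        if s == node and p == "knows"), None)
            if nxt is None:
                break
            path.append(nxt)
            node = nxt
        print(path)   # bob, then carol - one link at a time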

    But the question is why this functionality is not integrated into the middleware. We already have Evoos instead of the middleware, and hence we already have this functionality on the bottom layer. Even better, at this point we concluded that it has to be in the networking as well and not only in the computing, in the sense of a Semantic Internet (SI) or simply Semantic Net (SN). Oh, wait a minute, there was something with SN already. :D
    In a simultaneous step we took the connection between

  • Agent Communication Language (ACL) and messaging with or without ontology and
  • Natural Language Processing (NLP)

    to get rid of that reduced and hardly understandable baby language or gibberish of RDF, the slashes and dots of a URI, and so on.
    Other steps are explained in other comments of this Clarification and in other Clarifications referenced herein and in related publications of us.

    We quote an online encyclopedia about the subject knowledge graph, which was published on the 29th of June 2020 (first version): "A knowledge graph is a collection of interlinked descriptions of entities - real-world objects, events, situations or abstract concepts. It may include large volumes of factual information with semantics less formal than a traditional ontology. In some contexts, the term may refer to any knowledge base that is represented as a graph.
    Knowledge graphs are often defined in association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing to describe representations of knowledge that focus on the connections between concepts and entities.[1]"

    We quote an online encyclopedia about the subject knowledge graph, which was published on the 1st of May 2022 (actual version): "[...] They are also prominently associated with and used by search engines such as Google, Bing, and Yahoo; knowledge-engines and question-answering services such as WolframAlpha, Apple's Siri, and Amazon Alexa; and social networks such as LinkedIn and [Meta (]Facebook[)].

    History
    [...] In subsequent decades, the distinction between semantic networks and knowledge graphs was blurred.
    [...]

    Definitions
    There is no single commonly accepted definition of a knowledge graph. Most definitions view the topic through a Semantic Web lens and include these features:[11 [Knowledge Graphs [24th of January 2021]]]

  • Flexible relations among knowledge in topical domains: A knowledge graph (i) defines abstract classes and relations of entities in a schema, (ii) mainly describes real world entities and their interrelations, organized in a graph, (iii) allows for potentially interrelating arbitrary entities with each other, and (iv) covers various topical domains.[12 [Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods. [2016 and 2017]]]
  • General structure: A network of entities, their semantic types, properties, and relationships.[13][14 [What is a Knowledge Graph? - Ontotext]]
  • Supporting reasoning over inferred ontologies: A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.[2]

    There are, however, many knowledge graph representations for which some of these features are not relevant. For those knowledge graphs this simpler definition may be more useful:

  • A digital structure that represents knowledge as concepts and the relationships between them (facts). A knowledge graph can include an ontology that allows both humans and machines to understand and reason about its contents.[15 ["The Knowledge Graph about Knowledge Graphs". 2020.]]
    [...]
    Using a knowledge graph for reasoning over data []
    [...]"

    Comment
    Interestingly, the first version of this webpage was published only on the 29th of June 2020.

    For sure, this webpage of that online encyclopedia also does not provide all relevant facts and even has been manipulated, specifically by entities that want to steal parts of our Ontologic System (OS), in this case the so-called KG.
    Even worse, most if not all of those fraudulent entities have taken that online encyclopedia as the basis for their Linked Data (LD), specifically Linked Open Data (LOD), for sure in the course of creating an alternative reality and simulating a technological progress, which does not include C.S. and our corporation.

    We also quote a document about the field of our Knowledge Graph (KG): "Towards a Definition of Knowledge Graphs
    Abstract
    Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. [...] no official documentation about the used methods exists

    Introduction
    Considerable research into knowledge graphs (KGs) has been carried out in recent years, especially in the Semantic Web community, and thus a variety of partially contradicting definitions and descriptions has emerged. The often quoted blog entry by Google [18 [Introducing the Knowledge Graph: Things, not Strings, May 2012]] basically describes an enhancement of their search engine with semantics. And also [an online encyclopedia ...] does not provide information about knowledge graphs in general, but refers to the implementation by Google without mentioning the existence of other knowledge graphs. [...] Other definitions may lead to the assumption that knowledge graph is a synonym for any graph-based knowledge representation (cf. [12 [Journal of Web Semantics: Special Issue on Knowledge Graphs. [August 2016]], 16 [ Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods. 2016]]). [...]
    [...]

    Selected Definitions
    Knowledge graphs have been in the focus of research since 2012, resulting in a wide variety of published descriptions and definitions. Table 1 lists representative definitions and demonstrates the lack of a common core, a fact that is also indicated by [...] [16] in 2015. [...] a more precise definition was hard to find at that point [...].
    "A knowledge graph (i) mainly describes real world entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other and (iv) covers various topical domains." [...][16]
    "Knowledge graphs are large networks of entities, their semantic types, properties, and relationships between entities." [...][12]
    "Knowledge graphs could be envisaged as a network of all kind things which are relevant to a specific domain or to an organization. They are not limited to abstract concepts and relations but can also contain instances of things like documents and datasets." [A] Company [...]
    "We define a Knowledge Graph as an RDF graph. An RDF graph consists of a set of RDF triples where each RDF triple (s, p, o) is an ordered set of the following RDF terms: a subject s [...], a predicate p [...], and an object [...]. An RDF term is either a URI u[...], a blank node b [...], or a literal l [...]. [...]
    [...]
    [...] definitions could equally well describe an ontology or - even more generally - any kind of semantic knowledge representation and do not even enforce a graph structure. [...] defined a knowledge graph as a Resource Description Framework (RDF) graph and stated that the term KG was coined by Google to describe any graph-based knowledge base (KB) [...]. [...] Unlike the other definitions, which focus solely on the inner structure of the KG, they highlighted the importance of an automatic extraction system. In the preface of the 13th International Semantic Web Conference Proceedings (2014), the following statement was published:
    Significantly, major companies, such as Google, Yahoo, Microsoft, and Facebook, have created their own "knowledge graphs" that power semantic searches and enable smarter processing and delivery of data: The use of these knowledge graphs is now the norm rather than the exception. [14]
    Once again, this highlights the demand for a common definition, because it is necessary to define and differentiate KGs from other concepts in order to make valuable and accurate statements about the introduction and dissemination of knowledge graphs. Furthermore, this ISWC statement proclaims the use of knowledge graphs to be the norm in general, instead of restricting the scope, domain, or application area where KGs can be used beneficially and efficiently. [...]

    Knowledge Graph Applications
    In the 1980s, researchers from the University of Groningen and the University of Twente in the Netherlands initially introduced the term knowledge graph to formally describe their knowledge-based system that integrates knowledge from different sources for representing natural language [10, 15]. The authors proposed KGs with a limited set of relations and focus on qualitative modeling including human interaction, which clearly contrasts with the idea of KGs that has been widely discussed in recent years.
    In 2012, Google introduced the Knowledge Graph as a semantic enhancement of Google's search function that does not match strings, but enables searching for "things", in other words, real-world objects [18]. [...] the blog post does not provide any implementation details [...]. Since 2012, the term knowledge graph is also used to describe a family of applications. [...] [Google's] Freebase, [...] Google's Knowledge Vault, Microsoft's Satori and Facebook's entity graph [...]. Those applications differ in their characteristics, such as architecture, operational purpose, and technology used, which makes it difficult to find a consensus and to create a definition of knowledge graph. The lowest common denominator of the listed open source applications is their use of Linked Data, whereas hardly any proven information is available about Satori and the entity graph.
    [...] companies seek to describe a similar model that extracts and stores diverse enterprise data in a triple store and analyzes it by using machine learning techniques in order to acquire new knowledge from the data and to reuse it in other applications.

    Terminological Analysis And Definition
    [...] The second problem leads to the misleading assumption that the term knowledge graph is a synonym for knowledge base, which is itself often used as synonym for ontology. An example of such confusion is that both [Google's] Knowledge Vault and Google's Knowledge Graph have been called large-scale knowledge base by their respective creators [5 [Knowledge Vault: A Web-scale Approach to Probabilistic Knowledge Fusion. 2014]]. [...] Based on this information, their understanding of a knowledge graph is the cleaned knowledge base that is the population (e.g., instances) of their ontology. [...] a knowledge-based system uses artificial intelligence to solve problems, and it consists of two parts: a knowledge base and an inference engine. The knowledge base is a dataset with formal semantics that can contain different kinds of knowledge, for example, rules, facts, axioms, definitions, statements, and primitives [4 [Semantic Web Technologies: Trends and Research in Ontology-based Systems. 2006]]. Thus, Knowledge Vault cannot be classified as a true knowledge base, because it extends the idea of a pure semantic store with reasoning capabilities and therefore bears more resemblance to a knowledge-based system.
    An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity [9]. Ontological representations allow semantic modeling of knowledge, and are therefore commonly used as knowledge bases in artificial intelligence (AI) applications, for example, in the context of knowledge-based systems. Application of an ontology as knowledge base facilitates validation of semantic relationships and derivation of conclusions from known facts for inference (i.e., reasoning) [9]. We explicitly emphasize that an ontology does not differ from a knowledge base, although ontologies are sometimes erroneously classified as being at the same level as database schemas [...]. In fact, an ontology consists not only of classes and properties [...], but can also hold instances (i.e., the population of the ontology).
    [...] other contributors have pointed out that knowledge graphs are somehow superior to ontologies [3 [From Taxonomies over Ontologies to Knowledge Graphs, July 2014]] and provide additional features. Thus, the difference between a knowledge graph and an ontology could be interpreted either as a matter of quantity (e.g., a large ontology), or of extended requirements (e.g., a built-in reasoner that allows new knowledge to be derived).
    The second interpretation leads to the assumption that a knowledge graph is a knowledge-based system that contains a knowledge base and a reasoning engine. Focusing on existing automatically generated "knowledge graphs", we can identify a further essential characteristic: collection, extraction, and integration of information from external sources extends a pure knowledge-based system with the concept of integration systems. Most open source applications listed in Section 3 implement the integration aspect with Linked Data.

    Figure 1: Architecture of a knowledge graph

    Figure 1 illustrates the combination of these assumptions, which yields an abstract knowledge graph architecture. Based on this architecture and derived from the terminological analysis, we define a knowledge graph as follows:
    A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.
    This definition aligns with the assumption that a knowledge graph is somehow superior and more complex than a knowledge base (e.g., an ontology) because it applies a reasoning engine to generate new knowledge and integrates one or more information sources. Consequently, a manually created knowledge graph that does not support integration aspects is a plain knowledge base or knowledge-based system if it provides reasoning capabilities. [...] reasoning capabilities are highlighted as an essential characteristic to derive new knowledge and differentiate a KG from knowledge bases.
    [...] Hardly any information is available on the technologies applied in Google's Knowledge Graph and Microsoft's Satori, but Yahoo's Spark and the [Google's] Knowledge Vault apparently use Semantic Web standards such as RDF. Considering the layers of the Semantic Web, a knowledge graph, in comparison, deploys either exactly the same technology for every layer or a similar one that offers the same features. [...] In conclusion, the Semantic Web could be interpreted as the most comprehensive knowledge graph, or - conversely - a knowledge graph that crawls the entire web could be interpreted as self-contained Semantic Web.

    Conclusion
    Graph-based knowledge representation has been researched for decades and the term knowledge graph does not constitute a new technology. Rather, it is a buzzword reinvented by Google and adopted by other companies and academia to describe different knowledge representation applications. [...] Taking into account the diverse applications, a KG bears more resemblance to an abstract framework than to a mathematical structure. [...]"

    Comment
    Obviously, a part of our Ontologic System (OS) is discussed and even taken as blueprint, but simultaneously the public is misled by the plagiarists at several places, for example when it comes to the origin, the Semantic (World Wide) Web (SWWW), the proposed definition and architecture, and the abstract framework and mathematical structure.
    From a legal point of view, this goes beyond a copyright infringement and cannot be played down in the scope of a scientific work. It even remains a criminal act, as is the case with all other entities named in that document.

    In fact, a formal semantics or mathematical structure is required for the field of Agent-Based System (ABS) (see Michael J. Wooldridge et al. and, for example, the document titled "Agent Programming in 3APL"), and for the field of ontology-based system the same reasons for such a requirement apply as well.
    We also added the logical or mathematical foundation with our

  • Zero Ontology or Null Ontology, and
  • (hyper)graph representing the sets of the meta level, etc. (see also for example the Clarification of the 13th of April 2022)

    in addition to the various general types of logics and particular formal semantics of integrated systems.

    But:
    In the Investigations::Multimedia Android Special of the 8th of July 2013 we said "knowledge graph (foundational semantic data model of Ontologic System (OS) architecture)".
    In the Clarification of the 4th of May 2013 we also said "10. Now, it can be seen more and more that all available intelligent personal assistant systems for mobile devices, but also for stationary devices, follow our Ontologic System architectures, that is a knowledge graph, which is efficiently stored in a database, automatically updated with new facts, inclusive the user's habits, and also ordered accordingly, and an even more comfortable and also proactive retrieval system that utilizes all data stored, inclusive the learned data about the user."
    See also the

  • Investigations::Multimedia, AI and Knowledge management Linked Open Data (LOD) Special and Investigations::AI and Knowledge management of the 31st of December 2013,
  • Investigations::Multimedia, AI and Knowledge management of the 21st of April 2016,
  • Investigations::Multimedia, AI and Knowledge management of the 9th of December 2016,
  • Investigations::Multimedia, AI and Knowledge management of the 10th of December 2016, and also
  • Clarification of the 14th of December 2016, and
  • Clarification of the 18th of December 2016.
    In the OntoLix and OntoLinux Further steps of the 4th of October 2017 we also recalled the following: "
  • We also explained many years ago that our Ontologic System is based on
    • "[t]he Semantic (World Wide) Web effort [that] provides standards and technologies for the definition, exchange, and syndication of Metadata, Topic Maps, Ontologies, and Ontologics, and for collaborative information, knowledge, and process management, as well as the creation of open intelligent collaborative virtual environments" (see the webpage Introduction of the website of OntoLinux), and
    • dynamic graphs or hypergraphs with graph rewriting, like for example a data model compatible with a semantic structure or knowledge representation formalism such as Conceptual Graph (CG), Semantic Network (SN), Topic Map (TM), Resource Description Framework (RDF) Graph (RDFG), Web Ontology Language (OWL) Graph (OWLG), Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), etc. for storing, presenting, searching, retrieving, and interchanging knowledge, that resembles the structure and function of a brain with its neurons and synapses (see the Clarification #2 of the 10th of September 2012, and also the Ontonics Further steps of the 27th of September 2017 for example),

    which taken together also leads us straight to the so-called Dynamic Semantic Web (DSW) [(World Wide Web (WWW) available or present vs. DSW for the attention of) and what later became the field of Linked Data (LD) based on the attributes of dynamic evolving systems respectively the basic properties of our Evolutionary operating system (Evoos) evolution and metamorphism, co-creation and self-modification, and also ontology], and the fields of cybernetic ontology, proper ontology, and philosophy, and eventually back again to our all-encompassing field of Ontonics and Ontologic(-Oriented) (OO 3) paradigm, Ontologic System (OS), and Ontoscope (Os).

  • We also have the OntoGraphics and OntoScope software components based on an Ontologic Scene Graph (see the Comment of the Day of the 27th of January 2010) for representing 3D semantic object models.
  • We also described a specific start configuration of our OS and the servers of our ON that are operated by our OS (see the Ontologic Net and Ontologic Web Further steps of the 11th of May 2016).
  • We also said that as one resulting functionality based on the features of the OS a user can "Speak with the Web" and "Talk to the Web" directly (see the Comment of the Day of the 5th of May 2016 and Ontologic Web Further steps of the 9th of December 2016).
  • We also said that conceptually domain names are not needed anymore (see the Ontologic Net Further steps of the 5th of July 2017 and the Ontologic Web Further steps of the 6th of July 2017)."
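
    As a purely illustrative sketch of the dynamic graphs with graph rewriting recalled above (all labels are invented; this is not the original implementation), a rewrite rule matches a subgraph, deletes it, and glues in a replacement:

      # Toy graph rewriting on a semantic graph given as a set of
      # (subject, predicate, object) edges; the rule reifies a "marriedTo"
      # edge into a marriage node, changing the structure of the graph.
      def reify_marriage(graph):
          for (s, p, o) in list(graph):
              if p == "marriedTo":
                  node = f"Marriage_{s}_{o}"        # fresh node for the relation
                  graph.discard((s, p, o))          # delete the matched subgraph
                  graph.add((node, "partner", s))   # glue in the replacement
                  graph.add((node, "partner", o))
          return graph

      g = {("Ada", "marriedTo", "Bob"), ("Ada", "knows", "Carol")}
      reify_marriage(g)
      print(sorted(g))
      # [('Ada', 'knows', 'Carol'), ('Marriage_Ada_Bob', 'partner', 'Ada'),
      #  ('Marriage_Ada_Bob', 'partner', 'Bob')]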

    Somehow we have the impression that the large companies gave no detailed description to avoid a copyright infringement. But this does not work, because of the understanding and perception of the broad public, which recognizes our original and unique work of art in the plagiarisms, which equal our expression of idea, and in the multiple integrations with other properties of our OS, which provide evidence for using our OS as blueprint ....

    Somehow the term Knowledge Graph (KG) is misleading, because

  • on the one hand it is only a Graph-Based Knowledge Base (GBKB) and
  • on the other hand it is claimed to have a superior characteristic, which means a Knowledge-Based System or Knowledge Base System (KBS) (see the sketch below).
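
    This distinction can be made concrete with a minimal, hedged sketch (the namespace and facts are hypothetical, and the code is not the implementation of any named company): the stored triples alone are only a GBKB, while applying a reasoner that derives new knowledge, as in the quoted definition, yields the claimed superior, KBS-like characteristic.

      # Minimal sketch with rdflib (pip install rdflib); namespace and facts
      # are hypothetical. The stored triples alone form a Graph-Based KB;
      # the loop is a tiny forward-chaining reasoner that derives new
      # rdf:type triples along rdfs:subClassOf, i.e. the KBS aspect.
      from rdflib import Graph, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.Scientist, RDFS.subClassOf, EX.Person))   # ontology (schema) part
      g.add((EX.Ada, RDF.type, EX.Scientist))             # acquired instance data

      changed = True
      while changed:                                      # chain to a fixpoint
          changed = False
          for s, _, c in list(g.triples((None, RDF.type, None))):
              for _, _, sup in list(g.triples((c, RDFS.subClassOf, None))):
                  if (s, RDF.type, sup) not in g:
                      g.add((s, RDF.type, sup))
                      changed = True

      print((EX.Ada, RDF.type, EX.Person) in g)           # True: derived, not asserted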

    In the sense of operational ontology respectively ontologics and Ontologic Computing (OC), we also added either an

  • Ontology-Based Knowledge Base (OBKB) or Ontologic Knowledge Base (OKB or OntoKB), which
    • on the one hand is for "enumeration questions are answered by deductive reasoning within the ontological knowledge base modeled in [Web Ontology Language (]OWL[) ...] representing the static but very rich implicit knowledge that can be retrieved" and also "[o]ntological structures that change over time vastly enhance the representation capabilities of dialog management structures, or other structures like queries from which relevant features can also be extracted", and
    • on the other hand leads to the next example, that is the failed attempt to steal our Ontoscope by the kleptomaniacs of the DFKI in the year 2006 (for example no Immobile Robotic System (ImRS or Immobot), smart camera, or Mixed Reality (MR) and eXtended Reality (XR)),

    or

  • integration of semantic structures or knowledge representation formalisms for
    • storing, representing, searching, retrieving, reasoning, and interchanging knowledge, including for example
      • index,
      • thesaurus,
      • taxonomy,
      • Conceptual Graph (CG),
      • Semantic Net (SN),
      • Topic Map (TM), and
      • ontology,

      and

    • implementing data, information, and knowledge systems, including for example
      • Polygonal DataBase (PDB),
      • Knowledge Base (KB), and
      • Ontology-Based or ontological Knowledge Base (OBKB or oKB),

    and also

  • reasoning or inference engine

    to our static and dynamic polygonal Knowledge Base (KB) variants with the basic properties of

  • contextuality,
  • flexibility, and
  • agility, and also
  • continuity and
  • automaticity, as well as
  • operational semantics

    respectively Ontologic Knowledge Base (OKB or OntoKB).

    Indeed, Multi-Agent Belief Revision (MABR) is based on Agent Communication Language (ACL) based on ontology-based messaging, etc., and is dynamic with continuous updating of the belief base.

    But

  • ontology (wrongly) viewed as KB,
  • no triple store or Graph-Based DB,
  • no transactional RDF store,
  • no Graph-Based KB or KG,
  • no KBS,
  • no Semantic (World Wide) Web (SWWW), RDF, RDF-S, OWL,
  • no Computational Linguistics respectively NLP and NLU,
  • no NMP,
  • no text mining and text analytics,
  • nothing else on top of the agents' belief base other than MAS Belief Desire Intention (BDI) and FIPA.
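
    For illustration only, the MABR point above can be sketched with invented names and message contents (no FIPA library is used; the message format merely mimics FIPA ACL): an ontology-based "inform" message triggers a revision that retracts the contradicting belief and then expands the belief base.

      # Toy sketch of continuous belief base updating in the spirit of MABR;
      # all names and contents are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class AclMessage:
          performative: str   # e.g. "inform", as in FIPA ACL
          ontology: str       # ontology the content terms are drawn from
          content: tuple      # a (subject, predicate, object) statement
          negated: bool = False

      class Agent:
          def __init__(self):
              self.beliefs = set()   # pairs of (statement, truth value)

          def revise(self, msg):
              """Retract the contradicting belief, then expand the belief base."""
              if msg.performative != "inform":
                  return
              self.beliefs.discard((msg.content, msg.negated))    # the contrary
              self.beliefs.add((msg.content, not msg.negated))    # the new belief

      a = Agent()
      a.revise(AclMessage("inform", "ex-ontology", ("door", "state", "open")))
      a.revise(AclMessage("inform", "ex-ontology", ("door", "state", "open"), negated=True))
      print(a.beliefs)   # the earlier belief was revised away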

    Also keep in mind that it came suspiciously at the same time as C.S. created Evoos, and that it was also only described as a general concept at that time (see also the Clarification of the 13th of April 2022).

    With our Evoos, we already have a hybrid agent architecture, a holonic agent architecture, a logic system architecture, and an operating system architecture, with Bionics (e.g. AI, ML, ANN, etc.) on the reactive layer and ontology on the deliberative layer and social layer of the overall Evoos Architecture, including directly or indirectly all the semantic structures listed above.
    This is already the foundation of the OKB and of the KG of 2012.

    Obviously, the field of KG already is a semantical or ontological data set respectively knowledge representation with ontologics by definition, and is also tightly related to, connected with, or even based on our Ontologic System (OS), specifically when one looks at the

  • integrated foundations, which are Ontologic System Components (OSC), and also
  • applications and services, which all seem to be or even are Ontologic Applications and Ontologic Services (OAOS),

    which has been described by artwork and technology licensing partners of our SOPR and other authors as follows:

  • "A knowledge graph acquires and integrates information into an ontology [erroneously classified as being at the same level as a Knowledge Base (KB)] and applies a reasoner to derive new knowledge [viewed as Knowledge Base (KB) with reasoning or inference engine respectively Knowledge-Based System (KBS)] [viewed as knowledge-based system with integration system and even operating system respectively our OntoBot, OntoGraph, OntoBase, and OntoFS].
    [...]
    In conclusion, the Semantic Web could be interpreted as the most comprehensive knowledge graph, or - conversely - a knowledge graph that crawls the entire web could be interpreted as self-contained Semantic Web. [Wrong, SWWW is to markup the WWW content with semantics and Semantic Web Services (SWS) are for using these markups with autonomous agents, which are not KGs; misleading the public away from our original OS, which changes SWWW and SWS completely]"
  • "Semantic technologies make it possible to create the data model required "on the fly", to integrate multiple data sources, modify and add new relationships and data sources as needed."
  • "To make data smart, the machines needed to be no longer bound by inflexible data schemas defined 'a priori'."
  • "Key Lessons
    • [...] Ontologies are capable for semantic integration and reasoning
    • Semantic Integration is achieved at two levels:
      • Raw Competitive Intelligence data can be mapped (Instance mapping) to Ontologies using [Natural Language Processing (]NLP[)] techniques
      • Efficient semantic integration by using RDF and OWL
    • Powerful complex data modeling achieved by using graph principles inherent in RDF
    • Ontology development tools [...] can be used for manual ontology enrichment
    • [(RDF)] Triple stores [...] have sufficient inference engines for reasoning
    • Knowledge Discovery is possible with inference and reasoning on Competitive Intelligence data
    • Easy translation of questions to graph queries using SPARQL

    [...]

  • Overall Conclusion of Project
    • Semantic Integration (instance mapping using NLP) coupled with RDF data model was successful in answering questions in Competitive Intelligence
    • Ontologies provide a powerful framework in providing dictionaries and taxonomical relations that help to reason and inference the data for knowledge discovery
    • Manual curation is a tedious, error-prone, and labor-intensive task
      • A semi-automated intelligent computer-based solution that utilizes Ontologies, Semantic Integration and NLP could drastically reduce manual curation process and maintain high quality information"
  • "Ontology-driven Applications Using Adaptive Ontologies, and semantic software components"
  • "Our proprietary knowledge automation engine uses advanced AI to deliver 90% automation with minimal up front configuration work. The engine combines a semantic knowledge graph and multiple AI techniques including machine learning, reinforcement learning, [Natural Language Processing (]NLP[)] and [automated reasoning, also called] machine reasoning:
    • Machine reasoning to define rules and provide control over the automation
    • Machine learning [...]
    • Reinforcement learning [...]
    • NLP to teach the engine new knowledge - and a semantic knowledge graph to represent and store for the process being automated".
  • "In 2019, IEEE combined its annual international conferences on "Big Knowledge" and "Data Mining and Intelligent Computing" into the International Conference on Knowledge Graph[...]."

    Also note the relations of KG to the fields of Recommender System (RS or RecS) and Common Sense Computing (CSC), which support our claims even more, and the relation of Linked Data (LD) to CSC (e.g. wiki, online encyclopedia, Web 2.0).
    In addition to the techniques of the field of NLP, the fields of text mining and text analytics, and also other subfields and their techniques of the field of Natural Multimodal Processing (NMP), are utilized in relation to KG.

    But somehow many explanations seem to be the results of opportunistic goals and subjective opinions.
    Especially, the

  • lack of information and definitions by companies, such as Alphabet (Google), Microsoft, and others, about their Knowledge Graphs (KGs), and
  • multitude of different and often even wrong information and definitions by research institutes, companies, and individuals about the subject matter KG

    show that they do know our Ontologic System (OS) and have also stolen these original and unique foundational features of our OS through all the years, as we have always said, explained, and documented, and have now proven as well.

    Although the exemplary OntoGraph model elements show ontological instances (e.g. names of persons) (see the related images on the webpage of the OntoBlender component), the classification of an ontology as being at the same level as a KB is an error, because an ontology

  • is of a higher order,
  • is an explicit
    • "[set of] representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities, which substantiate one, many, or all domains of discourse",
    • metadata structure, which reflects the ontology of one or more domains, environments, or systems from which it was created, and
    • "formal specification of a shared conceptualization",
  • is used on the metalevel of one or more domains of discourse as abstraction and self-reference in the fields of philosophy, logics, and also computational ontology, computer science, and information science,
  • has "semantics[, that] try to mimic the real world as closely as possible" in,
  • emphasizes reuse and not a specific application,
  • "contains rules and constraints, usually expressed in logic, that model domain knowledge", and
  • has "also a close connection to natural language", but
  • does not include ontological instances respectively is not populated.

    The latter is a mistake that comes from entities that tried to jump on the bandwagon of ontology, for example in the field of Agent-Based System (ABS), and declared all kinds of KBs as ontologies (e.g. upper model ontology, foundational ontology, sub-ontology, management ontology, domain-specific ontology, individual domain ontology, integrated ontology, etc.), which simply renders them a Conceptual Graph (CG), Semantic Network (SN), Topic Map (TM), or another type of semantical data set respectively semantic structure or knowledge representation formalism, and eventually obsolete or only a synonymous term for one or all of them.
    The latter can also be seen with the erroneous classification of an ontology as a DataBase (DB) schema: an ontology is not a DB schema, but can be easily and automatically transformed 1:1 into such a schema under certain circumstances. And because an ontology is not a KB, it is also not a KG.
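
    Those certain circumstances can be sketched with a deliberately flat, hypothetical example: only when an ontology is reduced to classes with datatype properties does it map 1:1 onto a DB schema, while axioms, rules, and other higher-order constructs have no such table counterpart.

      # Toy sketch of the 1:1 transformation; class and property names are
      # invented, and only the flat, schema-like fragment of an ontology maps.
      ontology_class = {
          "class": "Person",
          "datatype_properties": {"name": "TEXT", "birthYear": "INTEGER"},
      }

      def class_to_table(cls):
          cols = ", ".join(f"{p} {t}" for p, t in cls["datatype_properties"].items())
          return f"CREATE TABLE {cls['class']} (id INTEGER PRIMARY KEY, {cols});"

      print(class_to_table(ontology_class))
      # CREATE TABLE Person (id INTEGER PRIMARY KEY, name TEXT, birthYear INTEGER);
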
    And with the discussion about the field of Knowledge Graph (KG) (see the explanations and proposed definitions above) we have the next attempt of entities to jump on the bandwagon of ontology.

    "In conclusion, the Semantic Web could be interpreted as the most comprehensive knowledge graph [...]"
    Viewing and calling the Semantic (World Wide) Web (SWWW) a Knowledge Graph (KG) on the one hand also seems to be the result of opportunistic goals and subjective opinions, specifically to

  • attack the legal scope of the rights and properties (e.g. copyright),
  • harm the reputations and social standing,
  • mislead the public about the performances and achievements,
  • damage the goals and even threaten the integrities

    of C.S. and our corporation, but on the other hand also supports and even proves our claims.

    But SWWW is only viewed as a

  • universal graph of RDF triples formed from all of the RDF documents on the web and
  • worldwide Knowledge-Based System (KBS)

    and correspondingly also called

  • web of linked data or simply web of data, and web of RDF documents in contrast to web of hypertext documents, and
  • knowledge web.

    But Linked Data is basically and only the connection of the KBs respectively the RDF graphs of the SWWW, as an additional form of (horizontal) organization based on the Dynamic Semantic Web (DSW), which is based on our Evolutionary operating system (Evoos), but not a KBS in itself, which can be built on top of LD structures respectively by utilizing the SWWW.
    Therefore, once again, the SWWW is only an RDF graph-based KBS and ontology-based or ontological KB, but not an Ontologic KB (OKB or OntoKB).
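
    A small sketch of this connection of RDF graphs (assuming network access and that the chosen URI serves RDF via content negotiation, as the common Linked Data entry points do):

      # Sketch: dereference a Linked Data URI with rdflib and follow the
      # plain owl:sameAs triples that link the fetched graph to other KBs.
      from rdflib import Graph, URIRef
      from rdflib.namespace import OWL

      uri = URIRef("http://dbpedia.org/resource/Berlin")  # typical LD entry point
      g = Graph()
      g.parse(uri)   # HTTP content negotiation returns an RDF document

      for _, _, other in g.triples((uri, OWL.sameAs, None)):
          print(other)   # links into other data sets, e.g. Wikidata or GeoNames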

    The emphasized notion and view of the graph-based properties plus

  • attributes of static or stable, synchronic system: flexibility, speed, security, transformation,
  • attributes of dynamic evolving system: evolution and metamorphism, co-creation, and self-modification,
  • dynamization, including the dynamization of ontology and the SWWW respectively the foundation of DSW, including the foundation of Linked Data (LD),
  • semantic graph-based structure as the foundation or substrate for computing, and networking,
  • graph-based processing, rewriting, and storing, computing, and networking,
  • polycontexturality, including polycontextural logics, allowing evolution and metamorphism, for example of DSW,
  • polygonality,
  • ontogonality,
  • integration of all-in-one,

    and also the fields of

  • linguistic instruments in Knowledge Engineering (KE), including multilingual linguistics,
  • Natural Multimodal Processing (NMP), including Natural Language Processing (NLP),
  • text data mining or simply text mining, and text analytics,
  • Common Sense Computing (CSC),
  • everything else that makes a KG superior to semantic knowledge representation, KB, KBS, ontology,
  • and so on

    came with our Evoos and our OS and from no other entity.

    Obviously, this is a foundational part of our Ontologic Web (OW) or Universal Brain Space or Global Brain 2.0 (see the Clarification of the 21st of September 2021), and formerly Global Brain Grid, with our

  • Content-Addressable Memory (CAM), Content-Addressable Storage (CAS), and Content-Addressable Network (CAN) Peer-to-Peer (P2P) computing,
  • Scalable Content-Addressable Network (SCAN),
  • Information-Centric Networking (ICN),
  • and so on (see the sketch below).
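
    The content-addressing principle shared by these components can be sketched in a few lines (a toy illustration with invented node names; a real CAN partitions a d-dimensional coordinate space among the peers instead of hashing modulo the node count):

      # Sketch: the key of a datum is the hash of its content, and the peer
      # responsible for it is selected from the key space.
      import hashlib

      NODES = ["node-a", "node-b", "node-c"]   # stand-ins for overlay peers

      def content_address(data: bytes) -> str:
          return hashlib.sha256(data).hexdigest()

      def responsible_node(key: str) -> str:
          return NODES[int(key, 16) % len(NODES)]   # toy key space partition

      key = content_address(b"some stored content")
      print(key, "->", responsible_node(key))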

    In fact, so to say a global KG is ... (see the Ontonics, OntoLix and OntoLinux Further steps of the 2nd of May 2016, Clarification of the {about name server} and the Virtual Object System (VOS)).

    It seems that at least the foundation of KG respectively Ontologic KB (OKB or OntoKB) is a part of our OS with its Ontologic System Components (OSC) and Ontologic System Architecture (OSA), which integrates all in one.

    Obviously, it is an essential part of our OS (see also the related note OS too large to steal of the 15th of May 2016).

    Howsoever, before we can make a decision in compliance with the Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions of our SOPR, we must look at some more details.

    Actually, we are reading all that nonsense published since the year 2012 in relation to KG and related nonsense published since the year 2007 in relation to Linked Data (LD) and Big Data (BD).
    But we can already tell once again that LD and KG are exactly what we

  • anticipated around the year 2005,
  • explained in relation to the failed attempt to steal our Ontology-based or Ontologic World Wide Web (OWWW) with the Semantic (World Wide) Web (SWWW) and that nonsense with the substitution of the eXtensible Markup Language (XML) with the JavaScript Object Notation (JSON) and semantically annotated HTML and XHTML, and also JSON by adding SWWW technologies (e.g. RDF in attributes (RDFa), embedded RDF (eRDF), Microformats, Microdata, JSON-LD, etc.), and another attempt to steal something related to our OWWW based on Schema.org, and
  • said that it should not happen at all: a(nother) big pile of semantic bull$#!+.

    We can also tell that something, or better said some kind of subsystem, platform, or fabric of the infrastructures of our SOPR is required, which unites all those NMPSystems, KBs, KGs, MUIs, and IPAs, and also other OAOS

  • alone to hook the individual NMPSs, KBs, KGs, MUIs, and IPAs into it and
  • together to provide the basics, which are common to all members,

    in addition to what we already have with the subsystems and platforms, as well as fabrics and spaces of the infrastructures of our SOPR.

    Furthermore, we can tell that fraudulent endeavours like Schema.org are dead and either go to our SOPR, the web archive, or a void. We had already started our Ontologics.info website before for exactly the same reason (see also the Ontologics.info Further steps of the 26th of August 2013).
    The same holds for that illegal plagiarism of a General Artificial Intelligence (GAI).

    The same holds for Linked Open Data (LOD), which also either goes to our SOPR, the webarchive, or a void.
    We will have a common KG.

    But we already think that we have got a clear cut that will make the legal teams of the governments and the industrial companies happy.
    Simply said, it is the integration of

  • ontology,
  • hybrid agent architecture,
  • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),
  • Global Brain,
  • Associative Memory (AM) or Associatively-Addressable Memory (AAM) (e.g. Content-Addressable Memory (CAM), BlackBoard (BB) (e.g. Tuple Space (TS)) system, Space-Based technologies (SBx)),
  • Multidimensional Multidomain Multilingual Multiparadigmatic Multimodal Multimedia technologies (M⁶x),
  • Mixed Reality (MR),

    and also

  • resilience,
  • Byzantine resilience protocols,
  • verifiable or verified computing,
  • capability-based security, specifically capability-based operating system,
  • attribute-based cryptosystem,
  • homomorphic cryptosystem,
  • Probabilistically Checkable Proof (PCP) system,
  • searchable cryptosystem,
  • smart contract transaction protocol,
  • blockchain technique,
  • Ontologic Computing (OC),
  • Autonomic Computing (AC),
  • Resource-Oriented Computing (ROC),
  • foundations of
    • microService-Oriented Architecture (mSOA), and
    • os-level virtualization or containerization,
  • fusion of realities,
  • next generations of Internet, World Wide Web, 5G,

    as well as

  • Ontoverse (Ov),
  • etc.

    to name just a few, as already discussed and explained.


    12.May.2022

    Ontonics Further steps

    We improved a module of a system in two different structural ways and one operational way.


    17.May.2022

    19:22 UTC+2
    Ontonics Further steps

    The third or fourth Killer Product™ respectively New Energy™ Blitz™ of our Ontonics™ Blitz Fund™ I Superbolt™ #4 Electric Power (EP) as part of our New Energy™ revolution after our

  • Ubiquitous Computing (UC or UbiC) and Internet of Things (IoT), and Networked Embedded System (NES), and also Cyber-Physical System (CPS) based on our Ontologic System (OS), including
    • Smart Energy IoT and
    • Smart Grid,

    and also

  • electric energy storage device series and
  • electromagnetic energy collection device series

    might be our device series, which increases the efficiency of energy consumers in the two-digit percentage range depending on the specific utilization.

    Only one of these solutions is already a Game Changer™ in human civilization and sufficient to establish our New World Order (NWO), but this part of the whole oeuvre establishes a New Reality™ (NR), which goes beyond the imagination of most humans.
    And this is only the beginning. Yeah, fascinating. Is it not? :D

    22:15 UTC+2
    Success story continues and no end in sight

    After 16 years, they found out that a so-called Trusted Artificial Intelligence (TAI) or Trustworthy Artificial Intelligence (TAI) is needed. Guess what is taken as foundation? Hint: It is a belief system, which is purely rational and resilient (e.g. trustworthy), and has the basic properties of (mostly) being validated and verified, and specification- and proof-carrying, and of having a formal semantics, including a formal operational semantics.

    Our revolution based on our Evolutionary operating system (Evoos) and Ontologic System (OS) continues and there is no end in sight.
    What comes now is a New Artificial Intelligence (NAI or New AI), because the whole big pile of semantic bull$#!+ must be cleaned up, ordered, and put together again. :D
    As we said, they have stolen our properties, but will not keep them.


    19.May.2022

    Comment of the Day

    "There is no compromise between light and dark, good and evil, love and hate.
    The good will dictate peace to the evil.", [C.S., Today]


    20.May.2022

    15:47, 17:04, 20:48 UTC+2
    OntoLix and OntoLinux Website update

    *** Work in progress - webpage not updated ***

    We added to the section Cybernetics of the webpage Literature of the website of OntoLinux the link to:

  • Institut für Kybernetik und Systemtheorie (ICS), Rudolf Kaehr and Thomas Mahler: Introducing and Modeling Polycontextural Logics

    See the Clarification of the 8th of May 2022 for quotes, comments, discussions, and explanations in the context of our Evolutionary operating system (Evoos) and our Ontologic System (OS), as well as the fields of cybernetics, kenogrammatics, PolyContextural Logics (PCL), semiotics, the Semantic (World Wide) Web (SWWW), Linked Data (LD), Knowledge Graph (KG), Agent-Based System (ABS), Multimodal User Interface (MUI), and other related fields.

    We also marked TUNES OS and Arrow System by *. If we find more evidence for espionage, then we will mark them by **.

    Obviously, the truly interesting and important fruits are hanging much higher in the tree. :)


    26.May.2022

    Comment of the Day

    "The Master said, By nature near together; by practice far apart.", [Analects or Lun Yu of Confucious, book XVII, verse 2, around 475 to 221 B.C.]

    Read also A Confucian approach to human rights.

    A lot of education is required worldwide.


    28.May.2022

    Ontonics Further steps

    We noted that several countries made decisions in the recent past, which have known conceptual problems, which again call into question the achievement of their aspirational goals.
    In addition, the overall matter is considerably more complex.
    Therefore, we thought about the conceptual problems, looked into our stock, and composed an alternative solution.

    Furthermore, we looked at the energy supply of American, European, African, Asian, and Australian countries once again as part of our design of the future overall infrastructure being realized by our SOPR and our other Societies together with governments, industries, organizations, and other national entities interested and concerned.
    So far we cannot see any problems in

  • North America, and also
  • Portugal,
  • Spain,
  • France,
  • Italy,
  • Greece, and
  • other countries in this area of Europe.

    Depending on the local situation we have developed customized respectively nationalized solutions, which ideally should be realized as planned.

    Specifically in case of the French Republic we have developed a quite interesting solution, which addresses the local situation, to transition to truly Clean and Green™ energy.

    We also cannot see any unresolvable problems in

  • many other countries.

    Indeed, we have to admit the existence of certain social, societal, political, legal, technological, and economic problems due to the dependency on the individual local situation. But most of them can be easily overcome with goodwill, agreement, and commitment.
