 
 
News 2022 June
   
 

04.June.2022

Ontonics Superstructure #27

For the location of a South American gigahub of our World Wide Hover Association (WWHA) Transcontinental Network respectively Silk Skyway with its hubs, superhubs, megahubs, and gigahubs we are looking at the city and urban area around

  • Sao Paulo, Brazil.


    06.June.2022

    08:00 and 23:22 UTC+2
    Short summary of clarification

    We have finalized the quoting of other works and are now finalizing the initial commenting on them in the Clarification of the 8th of May 2022.
    Still missing are some more comments, more ordering of the comments, better explanations of our works and better corrections of others' nonsense in the comments, and an epilog.

    But several of the most important results can already be recognized:

  • 1. We are able to provide significant evidence that our Evolutionary operating system (Evoos) with its Evoos Architecture (EosA) described in The Proposal has been taken as source of inspiration and blueprint, which shows a causal link to the original and unique expression of idea presented with The Proposal, specifically in relation to the fields of
    • Agent-Based System (ABS), including
      • Intelligent Agent System (IAS),
      • Cognitive Agent System (CAS), and
      • Multi-Agent System (MAS), including
        • Holonic Multi-Agent System (HMAS) or simply Holonic Agent System (HAS),
    • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),
    • Cognitive Architecture or Cognitive System,
    • Cybernetical Intelligence (CI),
    • evolvable architecture,
    • Semantic (World Wide) Web (SWWW), including Linked Data (LD),
    • fusion of realities,
    • etc..
  • 2. Our Evoos is a work of art and not just a scientific treatise or discourse, or a description of a system, and therefore protected by copyright in whole or in part.
  • 3. All deficits of other entities regarding expressions of ideas and elements have been confirmed once again.
  • 4. There are no legal loopholes.


    07.June.2022

    12:11 and 14:55 UTC+2
    Short summary of clarification

    More of the most important results that can already be recognized:

  • The Arrow System is a bold plagiarism of Gotthard Günther's works in the field of Cybernetics.
    And we also shared our impression about the highly suspicious coincidence with The Proposal. We said we would mark it, but now we will simply substitute it with the original works.

    In relation to ontology in general and ontology as used in the fields of Artificial Intelligence (AI), Knowledge Management (KM), Natural Language Processing (NLP), and Semantic (World Wide) Web (SWWW) there is no direct reference by the TUNES OS and our Evoos. But while the TUNES OS focuses on the field of operating system (os), our Evoos adds the fields of

  • Artificial Life (AL),
  • Agent-Based System (ABS),
  • Cognitive System (e.g. Belief-Desire-Intention (BDI), and Cognition and Affect (CogAff) paradigms or architectures),
  • Autonomous System (AS) and Robotic System (RS) (e.g. {model-reflective} Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot) (e.g. Ontology-Based Holonic Multi-Agent System (OBHMAS) or Ontology-Oriented Holonic Agent System (OOHAS))),
  • Ubiquitous Computing (UbiC) or Pervasive Computing (PerC) (e.g. Internet of Things (IoT), Networked Embedded System (NES), Cyber-Physical System (CPS), Industry 4.0 and 5.0, Intelligent Environment (IE) (e.g. Affective Computing (AC or AffC), Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot), etc.)), and
  • Mixed Reality (MR) and the fusion of reality and virtuality respectively the foundation of New Reality (NR)

    among many other fields.

    Furthermore, our Evoos is the foundation of the fields of

  • Cybernetical Intelligence (CI or CybI) = Cybernetics + HardBionics (HB) and SoftBionics (SB) (e.g. Artificial Intelligence (AI), Machine Learning, Computational Intelligence (CI or ComI) and Soft Computing (SC), Artificial Life (AL), Agent-Based System (ABS), Swarm Intelligence (SI or SwaI) or Swarm Computing (SC or SwaC), etc.) + Cognitive System,
  • Dynamic SWWW (DSW or DSWWW),
  • microService-Oriented Architecture (mSOA),
  • Software-Defined Networking (SDN),
  • New Reality (NR),
  • and so on.

    This leads us back to the beginning respectively the latest clarifications about the fields of

  • UbiC,
  • Knowledge Representation (KR), Knowledge Base (KB), Knowledge Graph (KG), Knowledge-Based System (KBS),
  • SWWW, Linked Data (LD), and
  • Metaverse (Mv) Multiverse (MvMv or Mv²),
  • Web 3.0, Web3, Web 4.0,
  • and so on.

    Specifically, what has been added by our Evoos is what makes the SWWW the DSW and the Web x.0, and eventually turns the failure into the success and the revolution, as usual when we take the helm.
    And there will be a Universal Ontologic (UO) or Global Ontologic (GO), including a Universal Ontology (UOL) or Global Ontology (GOL), which is evolvable, dynamic, and incorporates parts of all other ontologies, DataBases (DBs), Knowledge Bases (KBs), and so on, which also includes digital maps, digital globes, and other multimodal things, and is common to all.

    If and only if (Iff.) we take over the company Alphabet (Google) through our Society for Ontological Performance and Reproduction (SOPR) for a truly reasonable price (note damages, fees, royalties, and no payment for our rights and properties), then a part of its Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG) would become the (initial or foundational) GOL. Alternatively, we will build it up from a void or a blank.

    For sure, one can discuss every detail and potential mistake of us in relation to these fields, but at the end of the day the overall situation will not be changed.
    As in the case of the novel titled "The Old Man and the Sea" and written by Ernest Hemingway in the year 1951, the sharks gnawed off the marlin fish completely, but the remaining (endo)skeleton was secured.
    But in the follow-up part written by C.S., the sharks need the skeleton to survive somehow.
    And this is our core and we will not discuss long about the matter with politicians, scientists, managers, and other persons:
    All or nothing at all. Sign, pay, comply.
    See the section Further steps [Inviting letter] of the issue SOPR #33m of the 2nd of May 2022 for more details.


    09.June.2022

    19:10 and 21:00 UTC+2
    Ontonics Further steps

    We have once again adjusted our overall business plan, specifically the investment and development plans and the sequence of steps, to adapt to the latest decisions of governments and industries.
    But this adjustment respectively adaptation does not affect the volumes of investments, the sizes of locations, and the counts of jobs.


    11.June.2022

    13:36 and 16:10 UTC+2
    Short summary of clarification

    Now we are into the subject and getting it all together again. But it is still a little different. Honestly, we wondered a little why we were working on the Clarification of the 8th of May 2022 and discussing other works and their relations to our works, when we had already referenced the Clarification of the 28th of April 2016. :D

    So we have to correct a lot. At this point we already note that we have moved some content of the Clarification of the 8th of May 2022 to the other Clarification of the 28th of April 2016, because it is also a work in progress and belongs to the earlier clarification.

    Indeed, we have studied the subfields of proemiality and polycontexturality and PolyContextural Logic (PCL) of the field of Cybernetics, but we also studied the fields of Chaos theory or chaos and order, fractality and {?}holoiconicity, holonicity, holonic, holistic, and holologic, when working on our Evolutionary operating system (Evoos) described in The Proposal and The Prototype.

    We already began the discussion in the Clarification of the 28th of April 2016 about the various spectra, such as the

  • spectrum or continuum of chaos and order,
  • spectrum or continuum of non-symbolic or subsymbolic and symbolic,
  • spectrum or continuum of non-determinism and determinism,
  • spectrum or continuum of reactivity and deliberation,
  • spectrum or continuum of subconsciousness and consciousness, and
  • spectrum or continuum of reality and virtuality,

    which others and C.S. showed to be intertwined with each other seamlessly.

    This also points to the three-layer architecture or hybrid architecture in the field of Agent-Based System (ABS) consisting of

  • stateless reactive feedback control,
  • sequencing or reactive plan execution, and
  • deliberative computing or reasoning.

    See for example the following works:

  • Hultman, J., Nyberg, A., Svensson, M.: A software architecture for autonomous systems. 1989.
  • Morin, M., Sandewall, E.: A Software Architecture Supporting Integration of Sub-Systems. 1991.
  • Gat, E.: On Three-Layer Architecture. 1991.
  • Malec, J.: Autonomous robot control using a three-layered architecture. 7th of September 1994.
  • Müller, J.P., Pischel, M.: The Agent Architecture InteRRaP: Concept and Application. 1993.
  • Works cited therein.
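
    As an illustration only, the following minimal Python sketch shows the typical division of labour of such a three-layer or hybrid architecture; the toy task and all names are our own assumptions and not taken from the works listed above.

    # Minimal, illustrative three-layer (hybrid) agent control loop.
    # Toy scenario: a robot moves along a line towards a goal position.
    # All names are hypothetical; this only illustrates the layering idea.

    def deliberator(state, goal):
        """Deliberative layer: reason about the task and produce a plan (a list of waypoints)."""
        step = 1 if goal > state["pos"] else -1
        return list(range(state["pos"] + step, goal + step, step))

    def sequencer(plan, state):
        """Middle layer: reactive plan execution, select the next waypoint to pursue."""
        while plan and plan[0] == state["pos"]:
            plan.pop(0)                      # drop waypoints already reached
        return plan[0] if plan else state["pos"]

    def reactive_controller(state, waypoint):
        """Bottom layer: stateless reactive feedback control, one primitive action per tick."""
        if state["pos"] < waypoint:
            return +1
        if state["pos"] > waypoint:
            return -1
        return 0

    state, goal = {"pos": 0}, 5
    plan = deliberator(state, goal)          # slow, infrequent planning
    for tick in range(10):                   # fast control loop
        waypoint = sequencer(plan, state)
        state["pos"] += reactive_controller(state, waypoint)
    print(state["pos"])                      # -> 5, the goal has been reached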

    But our Evoos has a dynamic, reflective, metamorphic, flexible meta-layer structure or architecture, which can have one to infinitely many layers as required (at runtime). This is also the reason why we describe the Ontologic System Architecture (OSA) as a special abstraction of a layered system architecture, which requires the selection of a specific point in space and time and also a specific view to see its structure, and as being liquid, which also relates it to the atom model (e.g. position of an electron) and the CHemical Abstract Machine (CHAM) for example. See also the fields of Holonic Manufacturing System (HMS) and Holonic Multi-Agent System (HMAS) or Holonic Agent System (HAS). What was missing in 1999 was a Distributed HAS, which is the integration of the fields of Distributed operating system (Dos) and HAS, and which is given with our Evoos.

    We were also engaged with the relations of the fields of semantics to syntax to semiotics to kenogrammatics, which also led us to the fields of proemiality and polycontexturality. But the deeper we went to find a common, ultimate, and universal ground, the more esoteric and irrational the matter became. To show the latter we decided to quote works of the related fields broadly and extensively, so that everybody can get this impression of what we mean when talking about too much talk and blah blah blah, too few solutions, no risks to make decisions, and also subjectivity, esoterics, and irrationality.

    Some say the universe is a very big number or so, others say mathematics is the language of the universe, and others say the universe is a computation. At least, one can observe a development and transformation.
    The author of "Derida's Machines" discusses the characteristics and the very nature of numbers.
    But the syntax, signs, and semiotics of number systems and modern mathematics are a fractal. In fact, it does not matter which signs one uses for counting and calculating, the results are always the same, but only the length of the numbers or strings used to describe the numbers and the results of computation become shorter, if one has more signs available to express a result.

    This led to our conclusion that all must be one, including the fields of logics, mathematics, cybernetics and their classical logics, non-classical logics, Fuzzy Logic (FL), Arrow Logic (AL), PolyContextural Logic (PCL), holologic, and so on, and also their objectivity and subjectivity and proemiality. And the only truly rational concept that meets all aspects and requirements is not the proemiality, polycontexturality, Arrow System, or similar ... things, ideas, concepts, approaches, theories, and so on, but that the relationships are a fractal moving, operating, or executing in its own fractal structure respectively in itself as translations, transformations, or morphisms of itself.

    Correspondingly, our Zero Ontology or Ontological Zero does not represent an empty set or zero, like in logics, mathematics, or other fields, as also discussed by the author of "Derrida's Machines", but a point or location in the fractal, which can also be interpreted and written as a string of one or more zeros 0 or 00 or 000 or 0...0. It is not a beginning and it is not an ending, it has no beginning and it has no ending, exactly as required by theologies, philosophies, mathematics, cybernetics, and so on. The latter also solves the problems with 0 and infinity, as discussed in cybernetics.
    We also have the duality represented for example with 0 and 1, and we have the interval between or the range from 0 and 1 or the spectrum or continuum of 0 and 1.
    Some examples
    count of signs 2; length 1; decimal (2^0)
    0 (0)

    1 (1) (1 = 2^1 - 1)

    count of signs 2; length 2; decimal (2^1 2^0)
    0 0 (0)
    0 1 (1)

    1 0 (2)
    1 1 (3) (3 = 2^2 - 1)

    count of signs 2; length 3; decimal (2^2 2^1 2^0)
    0 00 (0)
    0 01 (1)
    0 10 (2)
    0 11 (3)

    1 00 (4)
    1 01 (5)
    1 10 (6)
    1 11 (7) (7 = 2^3 - 1)

    count of signs 3; length 1; decimal (3^0)
    0 (0)

    1 (1)

    2 (2 = 3^1 - 1)

    count of signs 3; length 2; decimal (3^1 3^0)
    0 0 (0)
    0 1 (1)
    0 2 (2)

    1 0 (3)
    1 1 (4)
    1 2 (5)

    2 0 (6)
    2 1 (7)
    2 2 (8 = 3^2 - 1)

    count of signs 3; length 3; decimal (3^2 3^1 3^0)
    0 00 (0)
    0 01 (1)
    0 02 (2)
    0 10 (3)
    0 11 (4)
    0 12 (5)
    0 20 (6)
    0 21 (7)
    0 22 (8)

    1 00 (9)
    1 01 (10)
    1 02 (11)
    1 10 (12)
    1 11 (13)
    1 12 (14)
    1 20 (15)
    1 21 (16)
    1 22 (17)

    2 00 (18)
    2 01 (19)
    2 02 (20)
    2 10 (21)
    2 11 (22)
    2 12 (23)
    2 20 (24)
    2 21 (25)
    2 22 (26 = 3^3 - 1)

    count of signs 10; length 1; decimal (10^0)
    0
    1
    2
    3
    4
    5
    6
    7
    8
    9 (9 = 10^1 - 1)

    count of signs 10; length 2; decimal (10^1 10^0)
    0 0
    0 1
    0 2
    0 3
    0 4
    0 5
    0 6
    0 7
    0 8
    0 9

    ...

    9 0
    9 1
    9 2
    9 3
    9 4
    9 5
    9 6
    9 7
    9 8
    9 9 (99 = 10^2 - 1)

    count of signs 10; length 3; decimal (10^2 10^1 10^0)
    0 00
    0 01
    0 02
    0 03
    0 04
    0 05
    0 06
    0 07
    0 08
    0 09
    ...
    0 90
    0 91
    0 92
    0 93
    0 94
    0 95
    0 96
    0 97
    0 98
    0 99

    1 00
    ...
    1 99

    2 00
    ...
    2 99

    3 00
    ...
    3 99

    4 00
    ...
    4 99

    5 00
    ...
    5 99

    6 00
    ...
    6 99

    7 00
    ...
    7 99

    8 00
    ...
    8 99

    9 90
    9 91
    9 92
    9 93
    9 94
    9 95
    9 96
    9 97
    9 98
    9 99 (999 = 10^3 - 1)

    count of signs 16; length n; decimal (16^(n-1) ... 16^0)
    max 16^n - 1

    count of signs i; length n; decimal (i^(n-1) ... i^0)
    max i^n - 1

    And so on.
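
    For illustration only, the relation between the count of signs and the length of the resulting strings can also be shown with a small Python sketch; the chosen number and sign sets are arbitrary assumptions.

    # Illustrative only: the same number written with different counts of signs.
    # With i signs and length n the largest representable value is i^n - 1, and
    # more signs only make the strings shorter, they do not change the results.

    def to_base(value, signs):
        """Return value written in a positional system with len(signs) signs."""
        base, digits = len(signs), []
        while True:
            value, remainder = divmod(value, base)
            digits.append(signs[remainder])
            if value == 0:
                break
        return "".join(reversed(digits))

    n = 26
    print(to_base(n, "01"))                 # '11010' (length 5 with 2 signs)
    print(to_base(n, "012"))                # '222'   (length 3 with 3 signs, 26 = 3^3 - 1)
    print(to_base(n, "0123456789"))         # '26'    (length 2 with 10 signs)
    print(to_base(n, "0123456789abcdef"))   # '1a'    (length 2 with 16 signs)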

    Computing with Words (CW or CwW)
      = 0
    a = 1
    b = 2
    c = 3
    ...
    z = ...

    Unicode Standard
    Unicode Transformation Format (UTF), extended American Standard Code for Information Interchange (ASCII), variable-width encoding
    UTF-8
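
    A small Python example, for illustration only, of this variable-width encoding of characters of the Unicode Standard with UTF-8:

    # Illustrative only: UTF-8 is a variable-width encoding, so different
    # characters need different numbers of bytes.
    for character in ["a", "ä", "€", "𝄞"]:
        encoded = character.encode("utf-8")
        print(character, len(encoded), encoded.hex())
    # a 1 61
    # ä 2 c3a4
    # € 3 e282ac
    # 𝄞 4 f09d849e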

    See also for example

  • "Fuzzy Logic = Computing with Words". May 1996.
  • "From Computing with Numbers to Computing with Words - From Manipulation of Measurements to Manipulation of Perceptions". January 1999, July 1999, 2000, 2002, 2005.

    Abstract Machines (AMs)

    Virtual Machines (VMs)

    Abstract Virtual Machines (AVMs)

    Operating systems (oss)

    And so on.

    But the other question, besides the question of who created the fractal, mathematics and logics, and life, is where the dynamics come from. Here we have space and time and ontology with its order of what existed before and what came after (see also arrow of time, entropy, thermodynamics, etc.).

    Furthermore, if one assumes a kind of complexity or an interplay of chaos and order, which is related to fractal structure or fractality, and self-similarity, as shown in for example the book titled

  • "Deterministic Chaos",

    which leads to self-referentiality and self-organization, as shown in for example the books titled

  • "The Global Dynamics of Cellular Automata",
  • "The Origins of Order [] Self-Organization and Selection in Evolution", and
  • "The Ambidextrous Universe [] Symmetry and Asymmetry from Mirror Reflections to Superstrings",

    and leads further to the contents of for example the books titled

  • "The Computational Beauty of Nature [] Computer Explorations of Fractals, Chaos, Complex Systems, and Adapation",
  • "The Blind Watchmaker [] [...]", which was written by the author of the books titled "The Selfish Gene" and "The God Delusion", and the watchmaker analogy was also the source of inspiration for calling our creation Calibre or Caliber, but in the sense of the highly complex raw ur-movement,
  • "Complex Systems and Cognitive Processes", and
  • "The Emergent Ego: Complexity and Coevolution in the Psychoanalytic Process"

    and also the works referenced in the

  • webpage Literature,
  • related clarifications, such as the Clarification of the 28th of April 2016, and
  • webpage Links to Software, specifically the sections

    then this observable universe must be part of at least one of all possible universes, which has a fractal structure. This also explains why the Fibonacci sequence and the Golden Ratio Phi are so interesting, indeed one can find direct connections of them to all these fields, why we have "Just Six Numbers [of] The Deep Forces That Shape the Universe", and why the concept of the multiverse is not so esoteric.
    Best of all is that a fractal is absolutely rational.
    Subjectivity can be or even is an objective number or rational number.

    This all leads us back to the start of the discussion about the various spectra.

    In this situation the problem is to get from one location in the fractal to another location in the fractal by finding another (location in the) fractal for the translation, transformation, or morphogenesis. And at this point complexity hammers in, because now we are in the fields of

  • Computational Complexity Theory (CCT), including the question whether the class of problems solvable in deterministic Polynomial time (P) equals the class of problems solvable in Non-deterministic Polynomial time (NP) (P vs. NP problem),
  • Algorithmic Information Theory (AIT),
  • and so on,
  • which also leads us back to deterministic chaos.

    We are not aware of any other works of art and oeuvre like this, specifically our Evoos and our OS. Indeed, we have a lot of works that discuss parts or excerpts of the big picture and therefore are referenced by us, but nothing has ever existed which integrates all in one, into one absolutely sound, homogeneous, and consistent Ontologics, Theory of Everything (ToE) with a Caliber/Calibre, Reality operating system, fusion of realities or New Reality, Ontoverse, and so on.


    13.June.2022

    Style of Speed Further steps

    As mentioned in earlier Further steps, we have some design directions for our 91x project, as shown in the two collages below.

    The first collage shows designs of the front and side sections (from left to right and top to bottom):
    1. Porsche, Marco Brunori, 911 Pounds (inspired by SoS 9EE series), 991 GT3 RSR 2017, and Mission R rear wing, and Tom Harezlak, 903
    2. Porsche, Marco Brunori, 911 Pounds (inspired by SoS 9EE series), 991 GT3 RSR 2017, and Vision 916 (inspired by SoS E-Conversion series)
    3. Edward Tseng, GT
    4. Alan Derosier, 931 Slant Nose Jägermeister (inspired by SoS 9x9 and 91x projects)
    5. Yann Jarsalle, 917
    6. Tom Harezlak, 903
    7. Gilsung Park, Electric Le Mans 2035 (inspired by SoS Street Legal series, for example 962 ST with 2000 PS and other models)
    8. Tom Harezlak, 903

    Style of Speed 91x front and side sections Collage 2022
    © Listed companies, designers, and photographers, and Style of Speed

    The second collage shows designs of the rear section (from left to right and top to bottom):
    1. Porsche, Vision Gran Turismo 2021 (both alternative designs of the rear section of 911 Pounds project, one design of rear section of 911 Pounds was also shown before with Mission R, because of SoS 9x9 RSR and 91x projects)
    2. Porsche, Marco Brunori, 911 Next Generation
    3. Radek Stepan, Hover Porsche (inspired by SoS Speeder series)
    4. Porsche, Marco Brunori, 911 Next Generation
    5. Volkswagen, Min Byungyoon, 911 100 (lowest largest image)
    6. Geely and Etika Automotive, Lotus, Evija (inspired by Style of Speed Street Legal, for example 962 ST with 2000 PS and other models)
    7. Porsche 919 Street (inspired by Style of Speed Street Legal series, for example 962 ST with 2000 PS and other models)
    8. Artem Popkov, 911 with mid-engine layout (inspired by SoS 9x9 Modern project)
    9. Porsche, Marco Brunori, 911 Next Generation, 959 Hommage (inspired by SoS 9x9 935 Hommage project also copied by Porsche with 991 GT2 RS 935 Hommage)

    Style of Speed 91x rear sections Collage 2022
    © Listed companies, designers, and photographers, and Style of Speed

    As one can see, the holes in front of the rear wheels at the sides of our 91x have an aerodynamic function.

    But missing is one more thing, which is our next world's first.


    15.June.2022

    17:30 and 17:55 UTC+2
    Ontonics Further steps

    We will talk with the company Alphabet (Google) about its takeover and the related plan, like for example the one discussed in the Further steps of the 4th of November 2018.
    If there is interest in realizing this act, then we can begin with the big calculation of the damages, royalties, evaluations, and set-offs, but we are not sure if the result will be positive or negative for Alphabet and therefore if a reasonable compensation is justified.

    In addition, the activities in relation to the undisclosed field and the company Amazon might have gained another huge momentum through the very potential joining of another not so small undisclosed entity. And yes, it is about the Next Superbolt™ New ...™ Blitz™.

    We are also continuously looking at other takeover candidates.

  • How about the takeover of Sony? C.S. would like to have an AIBO. :D
  • We highly recommend but would also appreciate that the company Coinbase comes to us.

    King Smiley Further steps

    Let us discuss the

  • New Mannahatta borough (see the Further steps of the 14th of January 2022),
  • Brooklyn-Queens Expressway (BQE) respectively Interstate 278, and
  • other interesting and proposed infrastructure projects

    in this little village.
    The rest of the world does not sleep either. :)


    19.June.2022

    22:55 and 29:55 UTC+2
    OpenAI GPT and DALL-E are based on Evoos and OS
    might become Clarification or Investigations::AI and KM

    *** Revision - correction and better explanation ***
    The combination and integration of the fields of

  • Evolutionary Computing (EC) and
  • Computational Linguistics (CL) respectively Natural Language Processing (NLP) and Natural Language Understanding (NLU)

    is around 25 years old stuff, which works better and better through brute force respectively more computing power.

    See for example the following works:

  • Harp, S.A., Samad, T., and Guha, A.: Toward the Genetic Synthesis of Neural Networks. 1989
  • Whitley, D., and Hanson, T.: Towards the genetic synthesis of neural networks. 1989
  • Miller, G.F., Todd, P.M., and Hegde, S.U.: Designing Neural Networks Using Genetic Algorithms. 1989
  • Kitano, H.: Designing Neural Network Using Genetic Algorithm with Graph Generation System. 1990
  • Boers, E.J.W. and Kuiper, H.: Biological metaphors and the design of modular artificial neural networks. 1992
    The authors propose a method based on biological metaphors to find automatically good Modular Artificial Neural Network (MANN) structures using a computer program. They argue that MANNs have a better performance than their non-modular counterparts and that the human brain can also be seen as a Modular Neural Network (MNN) and therefore propose a search method "based on the natural process, that resulted in the brain: Genetic Algorithms are used to imitate evolution, and L-systems are used to model the kind of recipes nature uses in biological growth".
  • Happel, B.L.M., and Murre, J.M.J.: Designing Modular Network Architectures Using a Genetic Algorithm. 1992
  • Gruau, F.: Neural Network Synthesis Using Cellular Encoding and the Genetic Algorithm. 4th of January 1994
    The author concludes that the application of Genetic Algorithm (GA) for the synthesis of Artificial Neural Networks (ANNs) using Cellular Encoding (CE) is Genetic Programming (GP) (GA + CE = GP). Consequently, the synthesis is called a Genetic Neural Network (GNN) and the synthesis method needs no learning.
    Specifically interesting and important is that CE
    • is based on modularity and Modular Artificial Neural Network (MANN) or simply Modular Neural Network (MNN) and
    • is a parallel graph grammar that checks a number of properties.
  • Angeline, P.J., Saunders, G.M., and Pollack, J.B.: An Evolutionary Algorithm that Constructs Recurrent Neural Networks. January 1994.
    The authors conclude that the application of Genetic Algorithm (GA) is inappropriate for network acquisition and apply Evolutionary Programming (EP) to construct a Recurrent Neural Network (RNN) directly with the GeNeralized Acquisition of Recurrent Links (GNARL), which simultaneously acquires both the structure and weights without an intermediate Cellular Encoding (CE) step.
  • Yao, X., and Liu, Y.: A New Evolutionary System for Evolving Artificial Neural Networks. 6th of January 1996 to 3rd of May 1997.
    The authors apply Evolutionary Programming (EP) to construct a Feedforward Neural Network (FNN) directly. Consequently, the approach is called EPNet.


    Our Evolutionary operating system (Evoos) was created for these technologies, applications, and services.
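
    For illustration only, a minimal Python sketch of this around 25 years old technique, a Genetic Algorithm (GA) evolving the weights of a tiny Artificial Neural Network (ANN); the task, the network size, and all parameters are our own assumptions and not taken from the works listed above.

    # Illustrative only: a Genetic Algorithm (GA) evolving the weights of a tiny
    # feedforward ANN that should learn the XOR function. No gradients are used.
    import math
    import random

    DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(weights, x):
        """2-2-1 feedforward net; weights is a flat list of 9 numbers."""
        w = weights
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    def fitness(weights):
        return -sum((forward(weights, x) - y) ** 2 for x, y in DATA)

    random.seed(0)
    population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                        # selection
        children = []
        while len(children) < 40:
            a, b = random.sample(parents, 2)
            cut = random.randrange(9)
            child = a[:cut] + b[cut:]                    # crossover
            child = [g + random.gauss(0, 0.3) if random.random() < 0.2 else g
                     for g in child]                     # mutation
            children.append(child)
        population = parents + children

    best = max(population, key=fitness)
    print([1 if forward(best, x) > 0.5 else 0 for x, _ in DATA])   # hopefully [0, 1, 1, 0]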

    We also quote a webpage, which is about the gas network or simply gas net in the context of Machine Learning (ML), Artificial Neural Network (ANN), and Evolutionary Computing (EC), was presented at the International Conference on Artificial Neural Networks in September 1998, and was published on the 3rd of October 1998: "Gas on the brain
    [...]
    [...] One of the chief routes towards this goal has been to increase the number of nodes and the richness of their interconnections. So how is it that researchers [...] have managed to create devices with the capability of large, complex neural nets that consist of just a handful of nodes and sometimes not a single interconnection?
    The answer, which is inspired once again by the workings of the human brain, lies in a virtual gas. The researchers' approach [...] opens the way for a new generation of powerful, lean computers, which they call "gas nets". The notion of gas nets is also giving neuroscientists a way to improve their simulations of the workings of the brain. "It represents a considerable step forward in understanding biological and artificial neural networks," [...].
    Though neural computers are based on the brain, they are actually pretty imperfect models - and not just because they have such paltry numbers of nodes and interconnections. A brain cell fires off an electrical impulse when the sum of the signals it receives from other neurons reaches a certain threshold. This much is copied by neural networks. A node carries out a mathematical procedure - which may be simple addition or something more complicated - on the inputs it receives from other nodes. If the result is above a certain threshold, then it fires an output.
    In the brain, neurons are separated by gaps, called synapses, and communication across these tiny chasms is carried out by chemical messengers, called neurotransmitters. So an impulse from one neuron must first be converted into a neurotransmitter, which is then converted back to electricity by the receiving neuron. To complicate the picture, different synapses have different effects on the receiving neuron - some may stop it firing, for example - and the effects change over time.
    To mimic these effects, the wires between nodes in neural computers carry a variable weighting: each one may increase or attenuate the signal it carries. This is the key to how neural computers "learn". A network is "trained" to recognise different patterns of input signals by changing the weights and firing thresholds of the nodes until it produces the required output.
    However, computer scientists have largely ignored synaptic chemistry. As a result, neural computers miss out many subtle effects that take place in the brain. Sometimes, for example, a neurotransmitter released at one synapse can change the way the receiving neuron responds to signals arriving at its other synapses - either boosting or blunting them.
    Nor are all these "neuromodulatory" effects confined to interconnected neurons. A decade ago, brain researchers were surprised to find that a neurotransmitter could spread its modulatory message to distant neurons. [...] That chemical is nitric oxide (NO).
    Because NO is so much smaller than other neurotransmitters, it can pass unhindered through cell membranes. And when the gas meets a neuron with a NO receptor, it can raise the amount of neurotransmitter released by that neuron in response to an electrical impulse. In effect it amplifies the neuron's influence on the cells it feeds into. The discovery of NO's long-range abilities demolished the notion that neurons communicate only via synapses and only with their neighbours. It also showed that an artificial neural network with nodes connected by wires alone was not just an imperfect model of the brain, but a pale shadow of it.

    Whiff of gas
    The work [...] brings neural computing closer to current thinking in neuroscience, by adding a virtual equivalent of NO. It is taking place at the Centre for Computational Neuroscience and Robotics, a unit set up in 1996 to encourage neuroscience and computing researchers to talk to one another. It was here that [a] neuroscientist [...] told [...] a specialist in evolutionary robotics, about having gas on the brain. "I didn't know about NO at all until about a year ago," says [a specialist in evolutionary robotics]. "It struck me immediately that it was interesting from a control engineering point of view. I saw that gases could modulate the network without changing the wires."
    Together with his colleague[s a specialist in evolutionary robotics] has developed methods for creating software simulations of neural networks by harnessing the power of evolution. His networks act as controllers for robots, allowing them to perform simple tasks [...]. To start with, [a specialist in evolutionary robotics] used conventional neural networks. But after talking to [a neuroscientist] he decided to add a whiff of gas to see what would happen.
    To create one of his controllers, [a specialist in evolutionary robotics] uses a genetic algorithm which treats the features of a network as though they are genes to be passed from one generation to the next. The number of nodes, the patterns of wiring between them, the weightings applied to those wires and the firing thresholds of the nodes are all thrown into the genetic mixer. For robots using a camera to see, [a specialist in evolutionary robotics] also allows the algorithm to choose any number of pixels from the camera image and how they connect to the nodes.
    Next, a computer generates 100 different networks, all with randomly chosen values for the features. Each network is tested to see how well it performs, using a computer simulation of the [given] problem [task]. Poorly performing networks are thrown out, but the better networks are allowed to reproduce by swapping a gene - a feature's value-here and there. The values assigned to a feature can also change at random, mimicking the mutations that happen in nature. The new networks created in this way are then tested once more and the whole cycle repeated. Successive generations yield networks that do a better and better job of guiding the robot to its goal, until eventually an optimal solution emerges.
    The random nature of the evolutionary process means that the genetic algorithm does not converge on the same "best" network every time it runs. [A specialist in evolutionary robotics] hoped that adding gas would increase the number of ways that networks could evolve, and perhaps generate simpler solutions. But there was a problem.
    "There isn't any space or time in conventional neural networks," says [a specialist in evolutionary robotics]. For virtual NO [gas] to have any effect, the positions of all the nodes would have to be known, together with some way to describe how the gas diffuses over time. To keep things simple, [a specialist in evolutionary robotics] and his colleagues limited the nodes to a flat surface, rather than three-dimensional space, and used a fairly crude description of how the gas would diffuse in a growing circle.
    So, the genetic algorithm for the gas net has to take into account the positions of nodes, the "firing threshold" at which a node will emit the gas, the speed of diffusion of the gas and whether the receiving neurons became more or less sensitive to incoming signals. All this on top of the features of a regular neural net.
    Working with [a ...] student, [a specialist in evolutionary robotics] decided to repeat a series of tasks previously tackled by [a mathematician]. Using genetic algorithms, [a mathematician] had consistently evolved conventional neural networks for the [given problem] task after about 6000 generations. A typical successful network used 46 nodes, well over 100 wires and eight pixels from the camera's output.
    By comparison, [a specialist in evolutionary robotics] and [a student] found that a gas net capable of guiding a robot to a [goal] rarely needed more than 1000 generations, and in some cases they emerged after only a couple of hundred. The gas nets were also far simpler than [a mathematician]'s. A typical gas net used between 5 and 15 nodes and only two or three pixels. Even more remarkable, the nodes of the gas nets were connected by hardly any wires: they influenced one another mostly via the virtual gas.
    "This demonstrates the power provided by having two distinct yet interacting processes at play. Signals are flowing down the wires connecting the nodes at the same time as the gas modulates the properties of the nodes," says [a specialist in evolutionary robotics]. "Structurally simple yet dynamically sophisticated networks could be really useful in, for example, space missions, where you need minimalist systems."

    Route finder
    [...]
    [...] One curious aspect of this experiment is that some of the final gas nets operated without any gas at all, although all of them used gas during their evolution. This suggests the gas played a key part in the learning process of the networks.
    [A neuroscientist] finds this particularly interesting. He and other neuroscientists suspect that NO [gas] has an important role in learning and memory in the brain. "One thing that happens as a result of learning is that the structure of the nervous system changes," says [a neuroscientist]. If a synapse is used frequently, the amount of neurotransmitter that crosses it increases, so the transmitting neuron has a greater effect on the receiving cell. This synaptic "strengthening" leaves a long-term memory of prior brain activity. But how does it happen? As NO [gas] can diffuse backwards [and hence bidirectional] across synapses from the receiving to the transmitting neuron, it looks like a strong candidate.
    [...]
    But what of the strategy of building ever bigger networks with more and more complicated wiring? This is the approach, for example, of Hugo de Garis [...]. [A specialist in evolutionary robotics] is sceptical of this approach. "Often by adding more connections you tend to screw up something that is just starting to work," he says. "Gas effects aren't permanent. They're only active when necessary. It's a gentler kind of effect."
    Gas makes neurons more plastic, says [a neuroscientist]. "For any connectivity pattern you can have a number of different behaviours depending on the way it is modulated by gas," he says. "Neurons in the circuit are different at different times."
    It is too early to tell where NO [gas] will eventually lead. "This may send neural network research off in a new direction," says [a specialist in evolutionary robotics]. His next goal is to develop gas-net robots with more complex behaviour than any robot controlled by a neural net. "If you want to build machines with anything like significant levels of intelligence, we think that wires and nodes will not be enough," he says. "You'll also need an artificial pharmacology."

    Comment
    Our Evolutionary operating system (Evoos) was created for these technologies, applications, and services.

    A similar approach respectively a rudimentary simulation of gas networks is the attention mechanism, specifically the cross-attention respectively encoder-decoder attention mechanism between an encoder and a decoder of a transducer, such as a transformer, reformer, and perceiver model in the field of ML, and the cross-attention between a byte array and a latent array to another latent array of a general transducer, such as a perceiver.
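
    For illustration only, a minimal Python sketch of such a (cross-)attention mechanism with Query-Key-Value (QKV) projections; the sizes and names are our own assumptions and not taken from any specific model.

    # Illustrative only: scaled dot-product cross-attention between a "query" side
    # (e.g. decoder or latent array) and a "key/value" side (e.g. encoder or byte array).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 8
    queries_in = rng.normal(size=(4, d_model))    # 4 query positions (e.g. latent array)
    keys_in = rng.normal(size=(10, d_model))      # 10 key/value positions (e.g. byte array)

    # Query, key, and value projections (here simple linear maps).
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    Q, K, V = queries_in @ W_q, keys_in @ W_k, keys_in @ W_v

    scores = Q @ K.T / np.sqrt(d_model)             # similarity of every query with every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key positions
    attended = weights @ V                          # weighted mixture of the values

    print(attended.shape)                           # (4, 8): one context vector per query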

    We can also use our Wireless Supercomputer (WiSer), as said in the Clarification of the 28th of April 2016.

    We also quote an online encyclopedia about the subject transduction or transductive inference in the context of Machine Learning (ML): "In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction is most interesting in cases where the predictions of the transductive model are not achievable by any inductive model. Note that this is caused by transductive inference on different test sets producing mutually inconsistent predictions.
    Transduction was introduced by Vladimir Vapnik in the 1990s, motivated by his view that transduction is preferable to induction since, according to him, induction requires solving a more general problem (inferring a function) before solving a more specific problem (computing outputs for new cases): "When solving a problem of interest, do not solve a more general problem as an intermediate step. Try to get the answer that you really need but not a more general one."[1] A similar observation had been made earlier by Bertrand Russell: "we shall reach the conclusion that Socrates is mortal with a greater approach to certainty if we make our argument purely inductive than if we go by way of 'all men are mortal' and then use deduction" (Russell 1912, chap VII).
    An example of learning which is not inductive would be in the case of binary classification, where the inputs tend to cluster in two groups. A large set of test inputs may help in finding the clusters, thus providing useful information about the classification labels. The same predictions would not be obtainable from a model which induces a function based only on the training cases. Some people may call this an example of the closely related semi-supervised learning, since Vapnik's motivation is quite different. An example of an algorithm in this category is the Transductive Support Vector Machine (TSVM).
    A third possible motivation which leads to transduction arises through the need to approximate. If exact inference is computationally prohibitive, one may at least try to make sure that the approximations are good at the test inputs. In this case, the test inputs could come from an arbitrary distribution (not necessarily related to the distribution of the training inputs), which wouldn't be allowed in semi-supervised learning. [...]"
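
    For illustration only, a small Python sketch of a related transductive technique, label spreading, which assigns labels directly to the given unlabeled test points instead of first inducing a general rule; the data and parameters are our own assumptions.

    # Illustrative only: transductive inference with label spreading. The two
    # labeled points and the clouds of unlabeled test points are made up; labels
    # are assigned directly to exactly these points, no general rule is induced.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    cluster_a = rng.normal(loc=(0, 0), scale=0.3, size=(20, 2))
    cluster_b = rng.normal(loc=(3, 3), scale=0.3, size=(20, 2))
    X = np.vstack([cluster_a, cluster_b])

    y = np.full(len(X), -1)        # -1 marks unlabeled (test) cases
    y[0], y[20] = 0, 1             # only one labeled example per cluster

    model = LabelSpreading(kernel="knn", n_neighbors=5)
    model.fit(X, y)                # labels spread through the clusters of test inputs
    print(model.transduction_)     # predicted labels for exactly these 40 points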

    Transducers include transformer, reformer, and perceiver models, which are based on the encoder-decoder architecture.

    We also quote an online encyclopedia about the subject Self-Organizing Map (SOM): "A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
    An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network.[1][2 [Self-Organized Formation of Topologically Correct Feature Maps". Biological Cybernetics. [1982]]] The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s[3 [Self-organization of orientation sensitive cells in the striate cortex. Kybernetik. [1973]]] and morphogenesis models dating back to Alan Turing in the 1950s.[4 [The chemical basis of morphogenesis. [1952]]]"
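
    For illustration only, a minimal Python sketch of the competitive learning of such a Self-Organizing Map (SOM); the grid size, the data, and the learning schedule are our own assumptions.

    # Illustrative only: training a tiny Self-Organizing Map (SOM) with
    # competitive learning, i.e. without error-correction learning.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(500, 3))                  # 500 observations, 3 variables
    grid = np.array([(i, j) for i in range(5) for j in range(5)])   # 5x5 map
    weights = rng.normal(size=(25, 3))                # one prototype per map node

    for t, x in enumerate(data):
        lr = 0.5 * (1 - t / len(data))                # decaying learning rate
        radius = 2.0 * (1 - t / len(data)) + 0.5      # decaying neighbourhood radius
        best = np.argmin(((weights - x) ** 2).sum(axis=1))    # best matching unit
        dist = ((grid - grid[best]) ** 2).sum(axis=1)          # distance on the 2D map
        h = np.exp(-dist / (2 * radius ** 2))                  # neighbourhood function
        weights += lr * h[:, None] * (x - weights)             # move prototypes towards x

    print(weights.shape)   # (25, 3): a two-dimensional map of prototypes for the 3D data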

    The document titled "Symbol Grounding Transfer with Hybrid Self-Organizing/Supervised Neural Networks" was published in the year 2004 and is based on

  • Self-Organizing Map (SOM), which is a type of Artificial Neural Network (ANN) and used for unsupervised learning, and
  • Multi-Layer Perceptron (MLP) network, which is used for supervised learning in particular and also for the attention mechanism in the field of Machine Learning (ML) in general, which again
    • is based on cognitive attention on the one hand and
    • includes self-attention on the other hand, and also
    • includes Query-Key-Value (QKV) attention, which applies query, key, and value networks, which again are typically Multi-Layer Perceptrons (MLPs).


    The word2vec technique for NLP translates words into numbers respectively produces word embedding vectors by using ANN and retaining syntactic and semantic proximities respectively associations of words.
    The technique has been developed further in variants, which produce contextual word embedding vectors and contextual string embeddings with context2vec and for example by retaining partial language modelling prediction and next sentence prediction.
    See also the Clarification of the 14th of May 2016 and 8th of July 2016.
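
    For illustration only, a heavily simplified Python sketch of the underlying skip-gram idea; the toy corpus, the dimensions, and the training details are our own assumptions and far away from the real word2vec implementation.

    # Illustrative only: a toy skip-gram model that maps words to vectors such that
    # words appearing in similar contexts get similar (close) embedding vectors.
    import numpy as np

    corpus = ("the king rules the kingdom the queen rules the kingdom "
              "the dog chases the cat the cat chases the dog").split()
    vocab = sorted(set(corpus))
    index = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), 8                       # vocabulary size, embedding dimension

    rng = np.random.default_rng(0)
    W_in, W_out = rng.normal(0, 0.1, (V, D)), rng.normal(0, 0.1, (D, V))

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Train on (centre word, context word) pairs within a window of 1.
    for epoch in range(500):
        for pos, word in enumerate(corpus):
            for ctx in (pos - 1, pos + 1):
                if 0 <= ctx < len(corpus):
                    i, j = index[word], index[corpus[ctx]]
                    h = W_in[i]                          # embedding of the centre word
                    p = softmax(h @ W_out)               # predicted context distribution
                    err = p.copy(); err[j] -= 1.0        # gradient of the cross-entropy
                    grad_in = W_out @ err
                    W_out -= 0.05 * np.outer(h, err)
                    W_in[i] -= 0.05 * grad_in

    def similarity(a, b):
        va, vb = W_in[index[a]], W_in[index[b]]
        return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

    print(similarity("king", "queen"))   # typically high: same contexts ("the", "rules")
    print(similarity("king", "dog"))     # typically lower: different contexts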

    Our Evoos is based on cybernetics, bionics, self-organization, Artificial Life (AL), ML, ANN, multi-layer and meta-layer structure or architecture, was created for these technologies, applications, and services, or even created said integrated subfield of SB.
    See also the so-called transformer, reformer, and perceiver models in the field of ML.

    Our Evoos was created further with our Ontologic System (OS) with its Ontologic System Architecture (OSA), which integrates for example

  • EC, and Smodels and Generate 'n' Test (GnT),
  • Smodels and Generate 'n' Test (GnT), and CL resp. NLP and NLU, and
  • EC, Smodels and Generate 'n' Test (GnT), and CL resp. NLP and NLU, and also
  • 2D and 3D
    • drawing,
    • painting,
    • modelling,
    • rendering, and
    • raytracing.

    Our Evoos also describes more technologies (see for example the related note Google NTM, DNC, Transformer, BERT, and Perceiver are based on Evoos and OS of today).

    Generative Pre-trained Transformer (GPT) is based on the

  • Transformer model, which again is based on the
    • word embedding technique,
    • attention mechanism, and
    • Self-Supervised Learning (SSL or SelfSL),

    and one of the variants

  • EC, and CL resp. NLP and NLU,
  • GnT, and CL resp. NLP and NLU, or
  • EC, GnT, and CL resp. NLP and NLU,

    and therefore it is based on either

  • Evoos or
  • OS.

    But with 2D and 3D, as implemented with the program DALL-E, which is a multimodal implementation of GPT-3 and generates images from textual descriptions, we are even deeper in the legal scope of ... the Ontoverse (Ov), aka. OntoLand (OL).
    See for example the

  • Ontologic System Components (OSC)

    and the

  • sections

    of the webpage Links to Software of the website of OntoLinux.

    Implementing respectively reproducing and publishing respectively performing our copyrighted works of art respectively properties in whole or in part as Free and Open Source Software (FOSS) and Free and Open Source Hardware (FOSH) will have serious consequences for all responsible entities.

    We already said that sooner or later Elon Musk will end in jail for many years and some other very well known persons should also focus on cleaning up the whole mess before we do so in the not so far away future.

    By the way:

  • SOPR, AoA and ToS, triple damages, licenses, infrastructures, blacklisting, LM royalties up to 20%, MCM, etc. need no discussion anymore.


    20.June.2022

    03:35 and 05:55 UTC+2
    Google NTM, DNC, Transformer, BERT, and Perceiver are based on Evoos and OS

    *** Revision - better explanation somehow ***
    Google has implemented respectively reproduced and published respectively performed essential parts of our copyrighted works of art titled

  • Evolutionary operating system, also known as Evoos, and
  • Ontologic System, also known as OS,

    and created by C.S. for the so-called

  • Neural Turing Machine (NTM),
  • Differentiable Neural Computer (DNC) (NTM with memory and cognitive attention, including self-attention, based on reflection respectively where and when respectively space and time functionality),
  • Transformer Machine Learning (ML) model, which is based on transduction respectively self-supervised learning involving unsupervised pretraining followed by supervised fine-tuning, and also self-attention, and
  • Bidirectional Encoder Representations from Transformers (BERT), which is a Machine Learning (ML) technique for Natural Language Processing (NLP), or better said Natural MultiLingual Processing (NMLP) pre-training based on the Transformer model with multilingual bidirectional encoder and special decoder, and
  • Perceiver and Perceiver IO for structured input and output, which are multimodal respectively general transformers without modality specialization, which again are based on Multi-Layer Perceptron (MLP) nets or (fully connected) Feedforward Artificial Neural Network (FANN) or simply Feedforward Neural Network (FNN) respectively the related part of the OntoBot component based on our Ontologic Computing (OC) paradigm.
    It seems that Alphabet is still learning and taking our Evoos and OS as source of inspiration and blueprint.

    See also for example the

  • Clarification of the 28th of April 2016 (keywords multi-layer perceptron and (MLP)), and
  • comment to the section about MXNet of the Investigations::AI and KM of the 14th of January 2018 (keywords multilayer perceptron and (MLP)), and also
  • Clarification of the 8th of May 2022,

    which discusses the fields of

  • cybernetics,
  • bionics,
  • self-organization,
  • parallelism and concurrency,
  • interactivity, Interactive Turing Machine (ITM) respectively Turing Machine with interactivity, and Interaction Machine (IM) or Turing Machine with Input and Output (TMIO),
  • reflective distributed Multi-Agent System (MAS) and Holonic Agent System (HAS),
  • proemiality and polycontexturality,
  • fractal structure and holonic structure,
  • translation, transformation and metamorphosis mechanisms,
  • and other topics

    in relation to our Evoos and OS, and also all these performances and reproductions of them.

    05:55 UTC+2
    Success story continues and no end in sight

    *** Revision - better explanation somehow ***
    We got a lot more evidence that we have revolutionized everything with our original and unique works of art titled

  • Evolutionary operating system, also known as Evoos, and
  • Ontologic System, also known as OS,

    and created by C.S., as can be easily seen with companies and their technologies, goods, and services based on our fields of HardBionics (HB) and SoftBionics (SB), such as for example

  • Alphabet→Google and → DeepMind
    • Neural Turing Machine (NTM),
    • Differentiable Neural Computer (DNC),
    • Transformer Machine Learning (ML) model,
    • Bidirectional Encoder Representations from Transformers (BERT), and
    • Perceiver transformer,
  • OpenAI
    • Generative Pre-trained Transformer (GPT) and
    • DALL-E image generator,
  • Microsoft
    • Turing Natural Language Generation (T-NLG),
    • GPT-3 Application Programming Interface (API), and
    • other technologies, goods, and services,
  • and so on.

    But we must caution about these brute force approaches, because they are not validated and verified, and therefore not trustworthy, and validating and verifying them is a totally different and much more expensive task, if they are validatable and verifiable at all.
    In fact, they are like children, who have got a matchbox and are now playing with fire, and it is already going off the rails.

    By the way:

  • We only accept a 100% takeover or 100% Society for Ontological Performance and Reproduction (SOPR) and the Articles of Association (AoA) and the Terms of Services (ToS) with the License Model (LM) and Main Contract Model (MCM) of our SOPR, etc..


    21.June.2022

    01:00 and 11:00 UTC+2
    Summary of website revision

    We have added to the note OpenAI GPT and DALL-E are based on Evoos and OS of the 19th of June 2022 corrections and quotes of

  • "Building Brains for Bodies", August 1993,
  • gas net,
  • Multi-Layer Perceptron (MLP),
  • transduction or transductive inference,
  • attention mechanism,
  • Self-Organizing Map (SOM),
  • "Symbol Grounding Transfer with Hybrid Self-Organizing/Supervised Neural Networks", 2004,
    and
  • word2vec technique

    to give more information and explanations in relation to the so-called transformer, reformer, and perceiver.

    The notes

  • OpenAI GPT and DALL-E are based on Evoos and OS of the 19th of June 2022
  • Google NTM, DNC, Transformer, BERT, and Perceiver are based on Evoos and OS of the 20th of June 2022, and
  • Success story continues and no end in sight of the 20th of June 2022

    might become a clarification or be moved to one or more related clarifications.

    While looking at the matter, we had the impression that Self-Supervised Learning (SSL) is presented as being quite new.
    Indeed, we noted that the

  • document titled "Attention Is All You Need" about the so-called Transformer ML model was publicated in the year 2017 after the other attempt to steal our properties with "Neural Turing Machine (NTM)" in 2014 and "Dfferentiable Neural Computer (DNC)" in 2016, both based on Long Short Term Memory (LSTM), which showed a hard break regarding the topics, and change of interest, which again was also reflected at least by the company Microsoft with OpenAI Generative Pre-trained Transformer (GPT) 3 after the other attempt with Turing Natural Language Generation (T-NLG), and
  • webpage of an online encyclopedia about the subject SSL only references relatively new works and that its German version was created on the 3rd of November 2020 and its English version is a translation of the German version created on the 9th of June 2021.

    Further research into the origin of SSL provided the following sources

  • Meta (Facebook), et al.: Wav2vec: State-of-the-art speech recognition through self-supervision. 19th of September 2019.
    which means Natural Sound Processing (NSP),
  • Alphabet (Google)→DeepMind, et al.: Representation learning with contrastive predictive coding. 22nd of January 2019.
    for unsupervised learning, and
  • Investigations::AI and Knowledge management, and Robotics of the 24th of September 2010
    for self-supervised learning. Note that Long Short-Term Memory (LSTM) based on the Recurrent Neural Network (RNN) was also developed at the Technical University Munich, which is the reason why Alphabet (Google), Apple, Amazon, and Co. collaborated with it and not with us. So we already have the next stealing of our property in this field, because those entities did not know what to steal before we showed what we do.

    Further research on

  • Alphabet (Google)→Google Research, Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton: A Simple Framework for Contrastive Learning of Visual Representations. 1st of July 2020.

    gave

  • Becker, S., Hinton, G.E.: Self-organizing neural network that discovers surfaces in random-dot stereograms. 1992.
    Note that
    • Self-Organizing Map (SOM) is also based on Artificial Neural Network (ANN) and
    • Geoffrey Hinton is Hinton, G.E..

    But a SOM only belongs to the Unsupervised Learning (UL or USL) part of SSL and prior art is based on a hybrid and modular connectionist model.

  • Schyns, P.G.: A modular neural network model of concept acquisition. 1991.
    The work is based on a hybrid and modular connectionist model consisting of an unsupervised Self-Organizing Map (SOM) and a supervised Multi-Layer Perceptron (MLP) for categorizing and naming, and references
    • Hinton, G.E., Becker, S.: An unsupervised learning procedure that discovers surfaces in random-dot stereograms. 1990.
  • Happel, B.L.M., Murre, J.M.J.: The Design and Evolution of Modular Neural Network Architectures. 1994.
    "The simulations rely on a particular network module called the categorizing and learning module. This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole."

    This implies that SSL was created with

  • Analysis and Design of an Operating System According to Evolutionary and Genetic Aspects, also called Evolutionary operating system (Evoos). 1999.
    Our Evoos adds meta-layer architecture, reflection, interactivity, multimodality, parallelism and concurrency, proemiality, polycontexturality, metamorphism, self-organization, Artificial Life (AL), Machine Learning (ML), Artificial Neural Network (ANN), Multi-Layer Perceptron (MLP), Holonic Agent System (HAS), Multi-Agent System (MAS), connectionism, ontology, and so on.

  • Riga, T., Cangelosi A., Greco, A.: Symbol Grounding Transfer with Hybrid Self-Organizing/Supervised Neural Networks. 2004.
    A. Cangelosi also wrote at least 2 documents together with S. Harnad, who again is the author of the document titled "The Symbol Grounding Problem" from 1990.
    "Schyns calls it "mapped functional modularity". His model contains an unsupervised module that categorises the stimulus set, while a supervised module connects labels to their representations. [...] However, Schyns's model is limited to the direct grounding of basic category names. No names of higher-order categories are learned via symbolic instructions, and therefore the grounding transfer mechanism does not apply. Instead, he concentrates on prototype effects and conceptual nesting of hierarchical category structures. Symbols are only used as indicators of knowledge and facilitators of concept extraction. [...] the present work builds on Schyns's [...]."
  • Stavens, D., Thrun, S.: A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous Driving. In: Proceedings of the Conference on Uncertainty in AI (UAI). 13th - 16th of July 2006.
  • Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S., Bradski, G.: Self-Supervised Monocular Road Detection in Desert Terrain. In: Sukhatme, G., Schaal, S., Burgard, W., Fox, D.: Proceedings of the Robotics Science and Systems Conference. 16th - 19th of August 2006.
    The work references the document titled "A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous Driving".
  • OS. 29th of October 2006.
  • Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. 2006.

    Alphabet (Google)→Google Brain and Google Research and University of Toronto, et al.: Attention Is All You Need. 2017.

    The work is about the so-called Transformer and cites 2 works about self-training:

    • McClosky, D., Charniak, E., Johnson, M.: Effective self-training for parsing. June 2006.
    • Huang, Z., Harper, M.: Self-training PCFG grammars with latent annotations across languages. August 2009.
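
    For illustration of the hybrid and modular connectionist prior art discussed above, which combines an unsupervised Self-Organizing Map (SOM) for categorizing with a supervised Multi-Layer Perceptron (MLP) for naming, we give a minimal sketch in Python. It is only a rough, self-contained toy under our own assumptions (random two-cluster data, a tiny 1-D map, one hidden layer, all names and parameters chosen by us) and not code from any of the works cited above.

import numpy as np

rng = np.random.default_rng(0)

# --- unsupervised part: a tiny Self-Organizing Map (SOM) on a 1-D grid ---
def train_som(data, n_units=4, epochs=20, lr=0.5):
    weights = rng.normal(size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
            for j in range(n_units):
                h = np.exp(-abs(j - bmu))          # neighbourhood shrinks with grid distance
                weights[j] += lr * h * (x - weights[j])
        lr *= 0.9
    return weights

def som_activation(weights, x):
    # soft activation over the map units, used as the input code for the MLP
    d = np.linalg.norm(weights - x, axis=1)
    a = np.exp(-d)
    return a / a.sum()

# --- supervised part: a one-hidden-layer MLP that "names" the SOM code ---
def train_mlp(codes, labels, hidden=8, epochs=500, lr=0.1):
    W1 = rng.normal(scale=0.5, size=(codes.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, labels.shape[1]))
    for _ in range(epochs):
        H = np.tanh(codes @ W1)
        err = H @ W2 - labels                      # linear output, squared-error gradient
        W2 -= lr * H.T @ err / len(codes)
        W1 -= lr * codes.T @ ((err @ W2.T) * (1 - H**2)) / len(codes)
    return W1, W2

# toy data: two clusters ("categories") in 2-D, each with a one-hot "name"
data = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(1, 0.1, (20, 2))])
names = np.vstack([np.tile([1, 0], (20, 1)), np.tile([0, 1], (20, 1))])

som = train_som(data)                                    # categorizing (unsupervised)
codes = np.array([som_activation(som, x) for x in data])
W1, W2 = train_mlp(codes, names)                         # naming (supervised)

pred = np.tanh(codes @ W1) @ W2
print("naming accuracy:", (pred.argmax(1) == names.argmax(1)).mean())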

    11:04 UTC+2
    Facebook Web2vec is based on Evoos and OS


    24.June.2022

    08:28 UTC+2
    Short summary of clarification

    We have continued the work related to the Clarification of the 8th of May 2022.

    We quote an online encyclopedia about the subject homoiconicity: "In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is a property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language, and thus the program's internal representation can be inferred just by reading the program itself. This property is often summarized by saying that the language treats "code as data".
    In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself. This makes metaprogramming easier than in a language without this property: reflection in the language (examining the program's entities at runtime) depends on a single, homogeneous structure, and it does not have to handle several different structures that would appear in a complex syntax. Homoiconic languages typically include full support of syntactic macros, allowing the programmer to express transformations of programs in a concise way.
    A commonly cited example is Lisp, which was created to allow for easy list manipulations and where the structure is given by S-expressions that take the form of nested lists, and can be manipulated by other Lisp code.[1] Other examples are the programming languages Clojure (a contemporary dialect of Lisp), Rebol (also its successor Red), Refal, Prolog, and more recently Julia.

    [...]

    Uses and advantages
    One advantage of homoiconicity is that extending the language with new concepts typically becomes simpler, as data representing code can be passed between the meta and base layer of the program. The abstract syntax tree of a function may be composed and manipulated as a data structure in the meta layer, and then evaluated. It can be much easier to understand how to manipulate the code since it can be more easily understood as simple data (since the format of the language itself is as a data format).
    A typical demonstration of homoiconicity is the meta-circular evaluator." [Meta-circular evaluator: "The term itself was coined by John C. Reynolds,[1] popularized through its use in the book Structure and Interpretation of Computer Programs.[2][6]"]
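
    As a minimal illustration of the quoted "code as data" property, here is a rough sketch in Python. Python is not homoiconic in the Lisp sense; its ast module is only used here as a stand-in to make the round trip from program text to data structure, manipulation in the meta layer, and evaluation visible. The example program and the MulToAdd transformer are our own assumptions.

import ast

source = "1 + 2 * 3"                    # base-layer program text
tree = ast.parse(source, mode="eval")   # the program as a data structure (meta layer)

class MulToAdd(ast.NodeTransformer):
    """Rewrite the program as data: turn every multiplication into an addition."""
    def visit_BinOp(self, node):
        self.generic_visit(node)        # transform the children first
        if isinstance(node.op, ast.Mult):
            node.op = ast.Add()
        return node

new_tree = ast.fix_missing_locations(MulToAdd().visit(tree))
print(eval(compile(new_tree, "<meta>", "eval")))   # prints 6, i.e. 1 + 2 + 3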

    We looked once again at the Robinson diagram, because we only found

  • Wilfrid Hodges: A Shorter Model Theory. 1998.

    and something related to logics, models, and formal languages, specifically first-order languages, but nothing pictorial, specifically "diagrams in the sense of pictures with arrows, as in category theory".
    Eventually, it turned out that we simply had not connected the term Robinson diagram with the set of all closed literals of the signature L(c) (where c is a sequence of distinct new constant symbols) of an L-structure A that are true in (A,a) (stated formally after the following list), in relation to the

  • field of model theory, and also
  • homomorphism,
  • embedding,
  • and so on.
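
    Stated in common model-theoretic notation (our own paraphrase of the textbook definition, not a quotation from Hodges or Leven), the (Robinson) diagram of an L-structure A is
    \operatorname{diag}(A) \;=\; \{\, \varphi \mid \varphi \text{ is a closed literal of } L(c_a : a \in A) \text{ and } (A, a)_{a \in A} \models \varphi \,\},
    where each c_a is a distinct new constant symbol naming the element a of A; allowing arbitrary first-order sentences instead of literals gives the elementary diagram.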

    Even more interestingly, we found out the following in

  • Talia Leven: Robinson's diagram as a tool for dealing with Skolem's criticism of formal language. 11th of November 2021:

    "Introduction
    According to [an online encyclopedia], a diagram is a symbolic representation of information, intended to convey essential meaning using visualization techniques. Although the word 'diagram' may suggest a picture, there was nothing pictorial about Robinson's use of this term in model theory. Nonetheless, Robinson's diagram is a symbolic representation of information. [...]

    ^3 In 1915, Leopold Löwenheim proved that if a first-order sentence has a model, then it has a model whose domain is countable. In 1922, Thoralf Skolem generalized this result to whole sets of sentences. He proved that if a countable collection of first-order sentences has an infinite model, then it has a model whose domain is only countable. This is the result which typically goes under the name of the Löwenheim-Skolem Theorem.
    ^4 If a countable first-order theory has an infinite model, then for every infinite cardinal number k it has a model of size k, and no first-order theory with an infinite model can have a unique model up to isomorphism.
    ^5 Skolem showed the weakness of formal language by means of a suitable construction of proper extensions of the system of natural numbers PA. This extension has the properties of natural numbers to the extent that these properties cannot be expressed in the lower predicate calculus in terms of quality, addition, and multiplication. These extensions of natural numbers are called 'the nonstandard models of arithmetic'. In addition, the Löwenheim-Skolem theorem showed that a collection of axioms cannot determine the size of a model: Every collection of axioms having an infinite model also has models of every infinite cardinal. An example of a nonstandard model of arithmetic is:
    ....... 1,2,3,4.... 1,2,3,4.... 1,2,3,4.... 1,2,3,4....

    [...] Robinson's philosophical point of view, which linked epistemology, formal language and existence.^6 Using formal language and logic as tools, together with the philosophical position that links semantics and syntactic[s] on one hand and epistemology, formal language, and existence on the other, will enable us to preserve the certainty of the classical notion of truth and reference without postulating non-natural mental powers.^7

    The empirical perspective
    [...]

    Diagrams as an intersection of semantics and syntax
    [...]

    ^6 Meaning the intended model will be the elementary sub-model of all the nonstandard models.
    ^7 Since Robinson was concerned with objectivity and therefore in objective concepts, he was very interested in methods for completing formal systems and defining tests for verifying their completeness.
    [...]
    ^8 Of course, there is no technical impediment to defining these enormous languages. But model theory in this context is regarded as merely a branch of pure mathematics, and therefore there is no real reason to worry about any of this.

    Diagrams as a tool for pointing at objects
    [...]

    Model complete
    [...]

    Prime model
    [...]

    The prime model is unique.
    [...]

    Diagram, persistence, prime model M0 and the transfer principle
    [...]

    Summary And Conclusions.
    This paper presents a possible way to address Skolem's criticism of formal languages using Robinson's tools, taken from model theory, such as diagram, model complete and prime model. The existence of different models that are not equivalent even to a complete formal system K is very disturbing, because the immediate consequence is that it is not possible to uniquely describe what a natural number is using the formal language L.
    Robinson believed that symbols in the formal system have a meaning that we cannot avoid. As he regarded semantics to be a part of mathematics, it was therefore possible and important for him to unite semantics and syntax into a single formal system.
    Robinson called this formal system 'a diagram'. Robinson thought of a diagram as a link between a formal system and its model. When a set K of axioms is complete, then K together with its diagram create a syntactic reflection of this model. According to Robinson, sometimes there is no distinction between syntax and semantics, since one may even assume that the relations and constants of the structure belong to the language and denote themselves (Robinson 1956, [Complete Theories,] 6).
    [...]
    Robinson's diagram has rightfully earned the title 'diagram' since it symbolically represents the syntactic as well as the semantic information of a complete set of axioms K and its intended model M.
    Robinson believed that one of the goals of mathematics should be a deeper understanding of its concepts. Perhaps a more profound comprehension of these notions will eventually lead to advancement in the philosophical understanding of logic and mathematics, concepts which in recent years have been overshadowed by technical achievements.
    According to Robinson, logic serves as wings to mathematics, allowing it to fly. (Robinson, 1964a, [Between Logics and Mathematics,] 220). I hope that the discussion presented here regarding Skolem's critique of formal languages is an example of this saying."

    Comment
    Robinson also did what Günther did in relation to symbols and languages, and their foundations and interpretations. But they took different directions and developed different approaches with the

  • formal system called Robinson diagram, and
  • kenogrammatics and kenogram, proemiality and proemial relationship, and polycontexturality.

    We also looked at the mailing list archive of the TUNES project and got the following facts about the author and the Arrow System

  • From the various pieces of information given in the emails we found out that he was 21 years old and an enlisted technician in the U.S. Navy, on the U.S.American aircraft carrier USS Carl Vinson, traveling over the Pacific to the Gulf of ((?)Persia), and that he had not much time due to the work (on board) at that time. He had no profound or sufficient experience in programming hardware.
  • 8th of October 1998 He had been talking with a member of the TUNES project since the 3rd of October 1998 and finally joined the TUNES project and its mailing list.
    He claims to be "a self-taught programmer and amateur mathematician who has been using your TUNES project's documentation to help with my intense research for the last 2 years. [...] I can say, after about three years, that I have fully covered the sites and issues that your TUNES documentation and reviews mention."
  • 22nd of October 1998 "I'm sure everyone here is familiar with my idea to develop a core of reflective logical theories (call it an AI, call it a reflective object system, call it a rose, it still smells the same) on top of a program written in Ansi-C in a static way (much like an embryo which is not self-sustaining)."
  • 24th of October 1998 "Indeed, my project will resemble yours ([another project member]'s) in some ways. However, I already have plans to turn it into a proto-Tunes system, with horrible efficiency at first, which will allow us to have a persistent system to reason about in terms of its semantical problems as well as its efficiency problems. For instance, I'm interested to know what kind and how large the minimal object system will be which can reason about itself (on the VM) in a computationally-complete way. Keep in mind that my objects are not computational objects in the conventional sense, but mathematical (logical?) objects within a temporal context (an inadequate description, I know, but perhaps you will get the gist of it). Everything at first will have to be an explicit object (defined as elements of main memory) to the VM. Their fields will not even directly be attributes, since those will be separate objects themselves. Other objects will not exist, at first. I believe that this kind of rigorous definition will give us the appropriate framework to start from. We should develop a VM which will represent in itself with an (eventually appropriate) structure of objects (remember that I mean to have each structure an individual object as well). I want to trace the operations of the system as it interacts with the user through a dialog interface and study the results to evaluate our design considerations for the VM. Eventually, the VM should be able to deal with objects not explicitly laid out in memory as separate atoms. Even the creation of new contexts should be done without changing the scheme for representing objects, until that is we have developed the ideas necessary to include this within the system's capabilities in a Tunes-friendly way.""

    This is not about Object-Orientation (OO) in the first place, but already about cybernetics, kenogrammatics, proemiality, and polycontexturality respectively subjectivity, as well as Model Theory (MT) and other fields, as becomes obvious in later emails. But we note that he is even talking about matters related to complexity and Algorithmic Information Theory (AIT).

    1st of January 1999 "more specifics: at one point of view, the system will be a big persistent heap of Self-like objects, each with hyperlinks into other objects within. [...]"

    1st of January 1999 "btw.. hyperlink is just a generalized term for pointer. it's a reference that only needs the details required for completion of the reference from within the target, not by the subsystem.
    [...]
    do you want me to write code FOR you? the arrow language is so simple that it has already been described again and again by myself. the system is merely supposed to guarantee the consistency of the arrow system up to the point of user interface. the Vocabulary development should be relatively independent of the implementation from this respect, and Vocabulary is the chief benefit of such a system as this.
    [...]
    i can't describe the arrow language without thinking about how to make such a system from arrows instead of a regular computing language, so that the implementation details which i suggest are only for my thoughts about the point of total (bootstrap and OS level) reflection."

    2nd of January 1999 "i've already specified the arrow language! the only thing left is vocabulary! can't you understand that? it's not some arbitrary computer programming language where the concepts are opinion-based.
    in case you missed it, here is the language, everyone:
    arrows are abstract objects with N slots, the "default" being 2. iteration on the default arrow type yields multi-dimensional arrow types. each slot is a reference to an arrow. all arrows are available for reference.
    THAT'S IT! everything else is vocabulary which builds conceptual frameworks. if you are looking for more specifics on the definition of the arrow language, LOOK NO FURTHER!
    you people really are dense.
    this is the largest container for semantics ever devised! it's obviously very much bigger than you can imagine.
    witness a recent statement from the discussion: the way to achieve tunes is not to add code, but to take code away from a specification."

    In the document version 8 it says: "A formal metaphor for specifying arrows should consist of viewing arrows as data structures with exactly two slots that are ordered". But this is the description of Günther's proemiality and polycontexturality, and together with the Robinson diagram we get Günther's abstract object called kenogram. As we explained, the proemial relationship can be drawn as 2 nodes interpreted as 0 and 1, connected by a pair of directed arrows, with the one arrow going from 0 to 1 and the other arrow going from 1 to 0. The resulting graph shows at the same instant of time the simultaneous relationship of two slots, two signs, and relator 1 and relatum 0, and relator 0 and relatum 1.
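
    For illustration only, and as our own reading rather than code from the Arrow draft or from Günther's works, a minimal sketch in Python of an arrow as an ordered 2-slot structure and of the proemial relationship as the pair of directed arrows between the two nodes 0 and 1 (reading the tail slot as relator and the head slot as relatum is our assumption):

# Illustrative only: an "arrow" as an ordered 2-slot structure and the proemial
# relationship as the simultaneous pair of directed arrows 0 -> 1 and 1 -> 0.
Arrow = tuple                        # 2 ordered slots: (tail, head)

nodes = (0, 1)
proemial = {Arrow((0, 1)), Arrow((1, 0))}

for tail, head in sorted(proemial):
    # we arbitrarily read the tail slot as relator and the head slot as relatum
    print(f"{tail} -> {head}: relator {tail}, relatum {head}")

# each node plays both roles at the same time: relator in one arrow, relatum in the other
assert all(any(t == n for t, _ in proemial) and any(h == n for _, h in proemial) for n in nodes)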

    24th of April 1999 "Announcement:
    I posted Brian's Arrow paper draft on the TUNES web site. The link is http://www.tunes.org/papers/Arrow/ (also ftp://ftp.tunes.org/pub/tunes/papers/)
    For those who don't know, the Arrow System is Brian Rice's idea of a TUNES-like system. You can post comments about it to the tunes list."
    The date of the first publication of the Arrow System matches the internal date of version 8 of the document, the 24th of April 1999.

    26th of April 1999 "Arrows n=m+1 example
    >'n', 'm', and '1' would be arrows (selectors) from sets of atoms:
    >'n' and 'm' would be part of a particular user context vocabulary
    >(called an ontology), and '1' would be an arrow from the set of
    >natural or integer or whatever kinds of numbers (again, in a graph).
    >this looks like a good place to start discussion... any comments?
    It is easy for me to imagine a '+' graph or a '=' graph because I know intuitively that the concepts behind these need to be linked (by arrows) to their arguments.
    But I have a problem to see 'n', 'm' and '1' as sets of arrows. Sure these are sets of 'things' but it is strange for me to make these things arrows. It seems to me that the arrow system must come to a point where it does not reference arrows of a graph, but simply 'things' of a set of 'things'.
    So it seems more intuitive to define an arrow as having two slots that can reference either an arrow or an 'atom (or object)'. I guess this must be wrong for you, since it would means to lose the homo-iconic property. But I don't see by myself yet why it would be so bad.
    This example have shown to me that graphs (set of arrows) are the way semantics get bound to arrows. I now tend to see the evaluation like an actor that add an arrow to the 'evaluate to' graph. By example an actor to add natural numbers, that do its job by looking inside the '+' graph for arrows whose both slots arrows are member of the 'natural numbers', and if so, add the two numbers and build an arrow with slot 0 referencing the arrow in the '+' graph, and slot 1 with a reference to the sum of the two numbers. This newly created arrow is then added to the 'evaluate to' graph.
    For me these numbers are just element of a set of objects. For you they are element of the set of 'natural numbers'. But then how does an arrow of this set became bound with the, let's say, five semantics.
    In my way of defining a slot of an arrow as either referencing an arrow or an object, I simply reference to a record that say where to find the object, what size it has and there I would find the sequence of bits 00000101."
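
    To make the graph-of-arrows evaluation sketched in the preceding email concrete, here is a rough, purely illustrative toy in Python, based on our own reading of that email and not on code of the TUNES project: graphs are sets of arrows (ordered pairs), and an addition "actor" extends the 'evaluate to' graph by looking inside the '+' graph. All names and the example numbers are our own assumptions.

# Illustrative toy: graphs are sets of arrows, and an addition "actor" adds
# arrows to the 'evaluate to' graph by inspecting the '+' graph.
naturals = set(range(10))                 # the 'natural numbers' graph, kept small here

plus_graph = {(2, 3), (4, 1), (7, 9)}     # each arrow (m, n) stands for the expression m + n
evaluate_to = set()                       # the 'evaluate to' graph, initially empty

def addition_actor(plus_graph, naturals, evaluate_to):
    """For arrows in the '+' graph whose slots are both naturals, add an arrow
    (expression-arrow, sum) to the 'evaluate to' graph."""
    for arrow in plus_graph:
        m, n = arrow                      # slot 0 and slot 1
        if m in naturals and n in naturals:
            evaluate_to.add((arrow, m + n))

addition_actor(plus_graph, naturals, evaluate_to)
for expr, value in sorted(evaluate_to):
    print(f"{expr[0]} + {expr[1]} evaluates to {value}")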

    See the comment to the next quote.

    27th of April 1999 "once again, READ THE DRAFT! ontological relativism is the concept of going against the "levels of abstraction" metaphor and the HIERARCHY that it implies! i thought that tunes was against hierarchies because of the ideas of cybernetics and their universality. obviously, i was terribly wrong."

    Indeed, it is

  • Günther's Kenogrammatics, Proemial Relationship Model (PRM), and PolyContextural Logic (PCL),
    • formalized context, calculus of context, or Contextual Logic,
  • Robinson's Model Theory (MT), and
  • Arrow Logic (AL),

    here collectively referred to as ontological relativism.
    Instead of writing all the related chapters of that draft, a few references to the original works would have been sufficient.

    As others and we already explained in relation to these fields, this idea of ontological relativism has a symbol grounding problem.

    27th of April 1999 "> I think we should all look at Cognitive Science research (and similar
    > sources) before we program our object system. I say it should be based on
    > _ordinary_ human thought constructs, not mathematics or computer science.
    >
    that's a great idea. why don't we program everything in english, since it's so natural? oh, wait. then we'd have to make an entirely different system for russians, or germans, or the eskimos, since their idea of human thoughts might be different from ours. the idea that _ordinary_ thought processes are enough for computer systems is as ludicrous as saying that because COBOL was closer to human language than any other of the early programming languages, that it was therefore the best among them.
    bottom line: it's against the ideas of utilitarianism (software re-use, etc.) to try to "force" anyone's idea of common sense representations into a computer."

    Thank you very much for the clarification in relation to the field of Cognitive Computing (CogC), because the Arrow System and the TUNES OS are only about ordinary computing, specifically graph processors and Graph-Based Knowledge Bases (GBKBs). Oh, Knowledge Graph (KG).

    27th of April 1999 "[...]
    ok. the "shifting" of contexts was supposed to relate to the papers by John McCarthy [and Saša Buvač] called "Formalizing Context (Expanded Notes)" (i lost the URL [www-formal.standford.edu], but i'm sure that the tunes site has it somewhere). so that related to interpreting information gained by one computation for the use of another, unrelated computation."

    So this explains the whole case further.
    John McCarthy: Notes on Formalizing Context. 1993.
    John McCarthy and Saša Buvač: Formalizing Context (Expanded Notes). 1998 and 28th of February 2012.

    27th of April 1999 "> I think we should all look at Cognitive Science research (and similar
    > sources) before we program our object system. I say it should be based on
    > _ordinary_ human thought constructs, not mathematics or computer science.
    >
    that's a great idea. why don't we program everything in english, since it's so natural? oh, wait. then we'd have to make an entirely different system for russians, or germans, or the eskimos, since their idea of human thoughts might be different from ours. the idea that _ordinary_ thought processes are enough for computer systems is as ludicrous as saying that because COBOL was closer to human language than any other of the early programming languages, that it was therefore the best among them.
    bottom line: it's against the ideas of utilitarianism (software re-use, etc.) to try to "force" anyone's idea of common sense representations into a computer.

    > Personally, the idea of a type/class system is pretty alien to me. My
    > world
    > consists of only 'objects'. Some 'objects' are very concrete: pen,
    > pencil,
    > keyboard, phone... Others are more vague (abstract): writing implement,
    > thing, idea, Tunes :), letter, song... All these 'objects' relate to one
    > another in different ways. Who needs types when we have _relations_? We
    > can say "a pencil _is_ a writing implement" or "a pen _is like_ a pencil"
    > or
    > "2 _is not_ a letter". As far as I know, that's how I represent things in
    > my head, and that's how other people do it too. Does anyone here do it
    > differently?
    >
    let's assume that we don't think differently, and that we make something upon which everyone can agree. have we accomplished the tunes goals? i say no. i say that tunes should be dynamically extensible by any user in a simple way _for any purpose_. no system today even remotely approaches this quality.
    > If different peoples' minds are somewhat incompatible, we should find out
    > how they're the alike, and make our object system flexible enough to
    > accomodate everyone. At the same time, it should accomodate computers, by
    > being fairly efficient... we might have to take shortcuts.
    >
    wow! i never thought of _that_ before! let's hack together some programming system / OS that just works. i'll bet that no one has tried that before. (intentional sarcasm)"

    28th of April 1999 "

    > [ snip ]
    > > ok. the "shifting" of contexts was supposed to relate to the papers by
    > John
    > > McCarthy called "Formalizing Context (Expanded Notes)" (i lost the URL,
    > but
    > > i'm sure that the tunes site has it somewhere). so that related to
    > > interpreting information gained by one computation for the use of
    > another,
    > > unrelated computation.
    > > the searching problem, as you state it, is another issue. i think that
    > the
    > > solution could be found by a simple idea: suppose we take some
    > data-format
    > > (a state-machine algebra) and define its information in terms of arrows.
    > it
    > > should be easy, then, to store most of the arrow system's information in
    > > terms of that data structure, based on the efficiency of the encoding
    > > (information density, search-and-retrieval times, etc). we could then,
    > for
    > > instance, store information in syntax trees (like LISP) and retrieve it
    > in
    > > the same way, constructing arrows for the information iteratively. this
    > > could even allow us to gain new information from, say, Lisp or Scheme
    > source
    > > code.
    >
    > I'm not sure I get it. You want to create a state-machine that, the same
    > way one can check the presence of e.g. a string, check the "presence" of
    > a valid system state? It sounds too simple, but it's an interesting
    > idea! Are you sure you wouldn't need to build a new machine for every
    > request?
    >
    actually, i'm sort of _interested_ in building a new machine at every request, but also in having the system be optimized for doing so. with arrows, their graphs can be used to build state-machines, possibly infinite in size (which means that abstract language models can be built). of course, the idea of having state-machines dynamically instantiated on request begs that we use partial evaluation and persistence to create these earlier, such as when we define the data-format to the system. then the question is where to place this information and how to re-use it effectively.
    > > > Now, where I'm getting at is: What kind of computational model do you
    > > > propose? ( that is: how will the system in practice process requests
    > > > like "what is n?" ?) It would be interesting, in the context of arrows
    > > > beeing a very general form of data organisation, to see what complexity
    > > > it would have, and what compromizes ( if any ) and restrictions ( if any
    > > > ) it would have.
    > > >
    > > in this case, i would generally propose lambda-calculus, which is
    > acheived
    > > quite readily by the arrow system when you make category diagrams.
    > category
    > > diagrams are just arrow graphs where all arrows compose sequentially.
    > the
    > > arrows represent lambdas, and the nodes represent expression types.
    > > otherwise, i believe that ontologies and algebras could help to define
    > any
    > > execution scheme that a person could imagine, even complicated ones.
    >
    > If I understand correctly, then arrows in their raw form must be
    > interpreted in some context. E.g. when you say above that
    > lambda-calculus can be achieved by regarding the arrows as those in a
    > category diagram. Other examples are object diagrams to model the state
    > of a system ( how the actual instances of things are connected together
    > ), and state diagrams to model the transitions between different states
    > in the system.
    >
    > I think this is where coloring of arrows, as June Kerby talks about,
    > comes in. In that scheme, you would say that a category diagram style
    > arrow is one color and a state transition style arrow is another. In the
    > general case, you would need one mother of a pallette. I guess you
    > wouldn't want coloring in your model, but still the problem of "what
    > does the arrow connecting these two enteties mean?" must be addressed in
    > some way.
    > There is however an alternative to coloring, that better fits the style
    > of your model, and that is contexts, saying that the color of the arrow
    > is dependent on the situation.
    >
    yes, and if you replace "color" with "meaning", then you'll find a somewhat complete answer in the arrow draft. (sections 2.2.2, 4.6, and 4.7, i believe)
    > But there are some problems with this that you might not want. First of
    > all the "situation" is dependent on the evaluation of something and then
    > you need context switches - but these context switches, beeing all
    > arrows, need some meta context switch in order to apply the proper
    > interpretation. And there will be _alot_ of context switching.
    >
    the arrow context switch could be contained by a single graph, or most likely a multitude of graph structures in order to provide a framework for reasoning.
    > The second problem ( I personally see this as a property and not a
    > problem, but I know Brian is against hierchies ) is that contexts,
    > always working on homoiconic arrows ( no coloring ), eventually end up
    > forming a hierchy. I.e. one context, formed by the area of switch-on to
    > switch-off, _must_ be contained fully inside or fully outside any other
    > context.
    > Imagine contexts 'S', for "legal state transition", and 'C', for "is in
    > the same category as". I'll denote a context with labled parenteses,
    > i.e. '(C' and 'C)'. Small letters are things to be interpreted. Now, if
    > I say:
    > (C a (S b C) c S)
    >
    > Then, given that the system is homoiconic, there is no-way to give any
    > meaning to 'b'. Choosing to view 'b' as talking about state transition,
    > gives you a 50% chance of failure - it's ambiguous.
    >
    yes, but then you're proposing the "push"/"pop" model anyway, which suggests a stack immediately. if you view it another way, then context-shift designators "(C" and "C)" enclose an ordered pair "(a,b)", which an arrow could represent. likewise, the state-transition designators "(S" and "S)" do the same for "(b,c)". by placing these graphs under the appropriate deterministic logic, meaning _can_ be derived, but it is really two completely separate meanings instead of a necessarily singular meaning. now, those two meanings _might_ be contradictory, but that would depend on the ontology: on how you decided to interpret what C and S meant in a given context. of course, if "b" is an atom in a context, then the interpretations of C and S should not be at the same order of abstraction in order to avoid the ambiguity.
    also, i'd like to relativize any concept, such as S="is a legal state transition". i'd like to place that in an environment (like Tunes) where generalizations can be readily made in a semantically clean way. my idea is that perhaps what passes for legal state transitions in one system means something completely different to another system or context. perhaps one acts as the machinery for the other, so that state transitions become parts of operators. or maybe the ontologies completely crosscut each other, so that it's hard to express verbally what the difference is.
    > If, on the other hand the system was not homoiconic, i.e. two different
    > types of arrows, then one can imagine that 'C' effected another type of
    > arrow then 'S'. That could work, but there would be inpureness or
    > sideeffects, and code would need to be veryfied at a level prior to
    > reflection - and you don't want that.
    >
    > Conclusion ( please flame me if I'm wrong ) : In a homoiconic system,
    > the need for interpreting information dependent on context, enforces
    > some hierchical property on the system.
    >
    i don't agree, of course. just take a look at the draft. you'll see that it describes contexts as identifying agents (i mean all the aspects of agents that you would want to apply) with structures of ongologies [ontologies] called "ontology frames". the frames are basically collections of nodes in an ontology graph, overlaid by a structure that i haven't looked into yet. the idea is that a context has a boundary, and that interpeting information from the outside of it requires some translation process from an exterior ontology to one of its own ontologies (represented by an arrow). within the context, perhaps the translations should be computable and completely defined, but that seems unnecessarily strict, since they should / could be used to build those transitions.
    the big idea, i think, is the use of a graph of ontologies with information interpretations between the ontologies as nodes. this graph will most certainly contain higher-order infinities of nodes as well as translations, but should be managable. the intention was to get around the strange properties of set theory, but the applications may be much wider in scope (i'm guessing)."

    At this point one can see that ontology seems to be understood as a synonym for context and information filter. See also the email of the 3rd of May 1999 for the further discussion of the terms context, agent, ontology frame, and ontology graph. See also the related comments below.
    We also note that no specific aspects of agents are specified beyond the general properties known from the fields of Information System (IS) and Knowledge-Based System (KBS).

    29th of April 1999 "On Wed, Apr 28, 1999 [...] Rice Brian [...] wrote:
    [blah blah... petty insults deleted]
    [...]
    > mumbo jumbo? hofstadter? you think reading hofstadter will help you
    > understand the discussion? can't you think for yourself, instead of
    > disagreeing with everything that you hear? can't you give me a chance?
    > can't you even look at the references on the Tunes review pages?
    Yeah, even Hofstadter is slightly more understandable than your arrow paper, I'm sorry to say. If you've got something there, only a really bright person with an extensive CS background would recognize it, the way you've written it. Likewise, it would take such a person to recognize your paper as mumbo-jumbo, if that's what it is. If you want *my* support, get out of this mindset "If you can't understand my great scientific work, you must be a moron" and write something I can read without losing consciousness! Just tell me the gist of it, and let me use my imagination. You've got to convince me that reading your paper isn't a complete waste of time."

    What should we say? :D Read the whole Clarification and related notes, explanations, clarifications, and investigations.
    We do apologize for being boring with our mumbo jumbo, but we must go through this blah blah blah and high tech stuff at the edge of the imaginable to get to the ground and then climb up again.

    4th of June 1999 "this is an old post by some months, but i thought that i should comment.
    > This comes from the new preface to the 20 anniversary Edition of
    > Godel, Escher, Bach. It made me think of Tunes and all our continued
    > discussion of reflection as a primary mechanism.
    >
    > ...one thing has to be said straight off: the Godelian strange loop
    > that arises in formal systems in mathematics (i.e., collections of
    > rules for churning out an endless series of mathematical truths solely
    > by mechanical symbol-shunting without any regard to meanings or ideas
    > hidden in the shapes being manipulated) is a loop that allows such a
    > system to "percieve itself", to talk about itself, to be "self-aware",
    > and in a sense it would not be going to far to say that by virtue of
    > having suh a loop, a formal system _acquires a self_.
    >
    > - Douglas Hofstadter
    >
    while i'm not sure from this statement what exactly the "strange loop" is, i do know what the statement identifies: a mathematical model of a theory that extends below the level of logic. in other words, it's not just an inference system, it's the logical activity beneath: the activity of, say, the processing machine involved. otherwise, the logical structure would confer some small meaning on the shapes generated (as is true of a Robinson diagram or positive diagram in model theory).
    the statement suggests that this loop is due to the creation of some shape with which the processing system may identify. the presence of the underlying system in its environment suggests self-similarity within the closure of that system and the environment it supports. (if we represent the mechanical processes of these shapes via arrows, we then obtain graphs which are both infinite and self-similar.)
    of course, i suggest something more ambitious: the ability of a system to modify itself in arbitrary ways suggests that this loop is not unique (perhaps over time or context-shift). so, in order to maintain this sort of reflection, the system's environment structures (or type system, if you prefer) must contain all of those possible selves (or models of self), or at least maintain 'self-representation' over context-shifts and self-modifications.
    now you all have another rational for the Arrow design."

    This statement is contradictory. We recall that the author has given in the chapter Introduction of the draft titled "The Arrow System" the following description: "From a mathematical perspective, the arrow world in terms of elementary model theory is a system for managing the Robinson diagrams and positive diagrams of all models that agents deem useful for a knowledge system." But with the statement about "a Robinson diagram or positive diagram in model theory" the Arrow System would not exhibit any self-representation, model of self, and strange loop, and would not acquire a self at all. This is even more contradictory in relation to a reflective system in general and this sort of reflection in particular, which eventually renders his theory and philosophy incomplete, inconclusive, or even impossible.
    At this point, it can be seen that he has not understood that taking the self-similarity respectively the fractal structure as the grounding is the solution.

    1st of May 1999 "I think that the arrow paper can trigers some interest around knowledge-level reflection. There is [are] real programs around this that can be try [tried], like the MetaKit, although I did not try that myself. Now, we get it more.
    Reinders, M., Vinkhuyzen, E., VOß, A., Akkermans, H., Balder l, J., Bartsch-Sporl, B.,Bredeweg, B., Drouven, U., van Harmelen, F., Karbach, W, Karssen, Z., Schreiber, G., Wielinga, B.: A Conceptual Modelling Framework for Knowledge-level Reflection. June/September 1991.
    Note that the term knowledge-level reflection was used before the term model-level reflection.

    19th of June 1999 "Arrow [graph] query...
    i am sending the following to the Tunes group as well, since it may clear up some present ambiguities:
    [ongoing discussion with Alexis concerning Maude's rewrite logic reflective capabilities and the Arrow idea.]"

    This is only relevant in relation to our Ontologic System (OS). But we wanted to mention it here for completeness.

    3rd of May 1999 "

    > I think that the simplicity of the arrow system as presented in the paper
    > is the kind of thing that I would look for in a proposed computing system,
    > from the standpoints of implementation and usability. I think that the
    > overview of existing systems given in the paper makes a strong argument
    > for a simpler system (as well as being interesting in itself).
    >
    cool. thank you. that part of the paper was initially "ad hoc", but gradually i've been finding ways to integrate it into the argument. it still has a small ways to go, in terms of separating out small parts which don't belong and some other issues.
    > One question that I am inclined to ask is to what degree the arrow system
    > can be integrated with external systems. I have a few ideas about this,
    > but maybe not very good. Or, to look at it another way, if the arrow
    > system has the potential to store and manipulate information in a way that
    > would make more efficient human-computer interaction possible, what steps
    > can be taken to accomplish more efficient human-computer interaction
    > given the tools of (a) our current computing systems and (b) a program
    > constructed to manage structures of arrows. In other words, what first
    > steps can we look for (or make) in this area?
    >
    well, we should only have to figure out how to represent arrow worlds and implement their dynamics on an existing computing system, and to constructively implement their reflective actions. as to interfaces, i think that i should elaborate the sketch of the ontology graph (which is very difficult to draw) in a more detailed way.
    i'm not sure, offhand, about answers to (a) and (b), but you can guarantee that i'll be thinking a lot about it in the coming hours and days. i should have an answer soon.
    > I was a bit lost on a few of the definitions, but again, I think this is
    > mainly due to my own ignorance. Just to list a couple, e.g. why is an
    > ontological frame required? Are the terms context, ontology, and
    > ontological frame a necessary part of the arrow system, or are they only
    > introduced for the purposes of constructing an argument within the paper?
    >
    > I lost the distinction between these terms, but I'm not sure to what
    > degree this affected my understanding of the paper.
    >
    i have to admit that you've hit upon the most unclear part of the system, and the one that forms the greatest (most debatable) part of the Arrow argument: that the Arrow system, with an appropriate conceptual model (not necessarily part of the system, but easily understood within the system) could form a whole that enables the interface with society that we would want of an information system.
    ok, now for the laymen's terms. the important thing about an ontology is that it is an information filter; it interprets everything in the world in a relatively small number of terms. a mathematical model is then a kind of ontology, where everything is a group of variables. the same thing goes for a state machine, a data format, a communications protocol, or even a computer language. in the paper, i used the example of HLLs as ontologies in reflective programming systems to explain how the arow system (or any cybernetic useful information system) should architecturally differ from the ordinary type of system.
    in a Self system, for example, everything isn't just an object, it's a _Self_ object. that's what makes it an ontology. of course, you'll say,that's obvious. the difference is that i'd like to reify that and compare _what something is_ in one ontology with what that same thing is in another ontology. that's the purpose of giving agents _frames_ of ontologies. the structure that the frames impose over their ontologies is not specified, and is left up to the user or at least should be discussed by Tunes.
    how this relates to context, i'm only just now formalizing (many apologies). a context is supposed to take a model and make it implicit to the agent's actions. it's supposed to specify what is true here and now, not then and there, and what variables can change here and still leave "hereness" unchanged. this is supposed to be the user's greatest handle on the system: the ability to specify, generalize, and crosscut context in free ways.
    i leave it to you to think about, and i'll return with more answers when i'm clear on my own thoughts."

    See also the email of the 28th of April 1999 for the further discussion of the terms context, agent, ontology frame, and node in an ontology graph.
    Furthermore, communication protocol is not Software-Defined Networking (SDN) technology, which is used for network management.
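
    As a rough reading of the quoted "ontology as information filter" idea, and purely for illustration (the names, classes, and toy ontologies below are our own assumptions, not definitions from the Arrow draft), a small sketch in Python: an ontology interprets every raw item in a small vocabulary of its own terms, and an agent holding a frame of several ontologies can compare what the same thing is under each of them.

# Illustrative only: an "ontology" as an information filter and an agent's
# "frame" as a collection of ontologies over which the same item is compared.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class Ontology:
    name: str
    interpret: Callable[[Any], str]       # maps any raw item to one of the ontology's terms

@dataclass
class Agent:
    frame: Dict[str, Ontology] = field(default_factory=dict)

    def views(self, item):
        # what the same thing "is" in each ontology of the frame
        return {name: onto.interpret(item) for name, onto in self.frame.items()}

# two toy ontologies filtering the same raw items in different terms
types_onto = Ontology("types", lambda x: type(x).__name__)
size_onto = Ontology("size", lambda x: "big" if len(str(x)) > 3 else "small")

agent = Agent(frame={"types": types_onto, "size": size_onto})
print(agent.views(42))        # {'types': 'int', 'size': 'small'}
print(agent.views("pencil"))  # {'types': 'str', 'size': 'big'}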

    1st of July 1999 "[...]
    It is a draft, and an obsolete version at that. Currently, I am modularizing the issues that the paper addresses and improving on that basic modification. First, a few papers concerning the constructive ideas that I presented there will be released. They will concern "reflective relativised arrow logics / theory", "an abtract notion of computing ontology", "model-level reflection: the full role of computational reflection", and "information atomization" in that order. These will address separate notions that are presented almost concurrently in the paper that I have publically released. My reason for not having posted these yet is that they do not yet form a complete cycle of obsolescence for the original paper, in that they so far only add information while separating text per subject matter. I believe that release of the current versions of those papers would only confuse those who would try to track the development of my ideas. Also, my own efforts in explaining the terms due to changing definition texts would help little. They will be released soon, however.
    The next set of papers will form a dependent series. The first will explain the application of the arrow construct and logics and theory to the notion of information atomicity. The next paper will explain the porting of the ontology concept from current software paradigms to the Arrow system, discussing various means for modelling ontologies, as well as the logical constructions used to specify them initially. Also, it will highlight the novel properties of ontologies that result. A complete development system will be described in this way which subsumes the usual notions of I/O management, functional implementation, user-interface, and general-purpose information retrieval that usual computing systems must address in an ad-hoc manner. The paper will also elaborate upon the benefits of such a systematic approach over the relatively lame methods of today. The following paper addresses the overall characteristics of such a system, and could be used to describe a commercial implementation's features, explaining why and how they are useful. Of particular note will be software system unity and the various effects it could have on overall utility for society.
    [...]
    I have learned a great deal from discussions with the tunes members, particularly in attempting to explain some terms which were originally vague. My deepest regret is that the current paper will have to represent my ideas for a time, since I believe that I can explain much better in person (i.e. interactively). I also apologize to interested persons that I am keeping this development "under wraps", so to speak.
    [...]"

    24th of August 1999 "[...]
    >1) Practical application #1: Creating Ontonologies for every major
    >processor, architecture, OS, and environment, along with basic
    >programming theory and math (in that order). In this way you'd be able
    >to analyse a bitstream (program) and have it recompose it into another
    >form. For example taking a Windows program, and making it KDE on some
    >unix variant (due to there similarities in capability). There would be
    >a very huge demand for something of this nature, and may even be a money
    >making opportunity. Or the concept of "dedicated servers", in which an
    >entire OS environment with a single purpose (web serving, FTP, etc.)
    >could be created for almost any modelable purpose. This would totally
    >replace the need for "jack-off-all trades" type OS's (NT, Unix).
    >Companies usually only need certian capabilities, why not implement them
    >in the most efficent way possible for the given hardware.
    well, that's quite a lot of work to do, but then there are many programmers to be thrown around these days. the trick of course is to convince them to throw themselves at your own tasks. my focus is more related to ontologies that provide generic frameworks, and to use those to develop ones specific to a processor, etc. also, one big limitation on the ontology notion that i suggest is that translating between various ontologies is very often not computable or simply infeasible. also, if the user requests a translation, then the computing system needs to ask the right questions of the user to construct the desired kind of translation.

    [...]

    [...]
    the user/coder should always keep in mind the current ontology that they desire to build. the system of course will eventually be capable of analyzing such a development at a fine-scale, able to describe the intermediate states of ontologies (as they are built) as other ontologies. all that is required of an ontology is that it's elements providing meaning can be grouped together, which is relative to other ontologies (say, requiring ontologies to be consistent systems of predicates within a logic).
    one thing to add: arrows are epistemic constructs and ontologies are built from them. this is the philosophical view on the system's conceptual strategy.

    [...]

    >10) Practical Application #2: Language barriers. This system could be
    >used as a universal translator for human language, even from a voice
    >sample. Geeze, can't see any practical application for that...hehe
    shhh... :) (of course it will still take a lot of thought to put into a framework for langauges, but then i've been researching linguistics all along. so, yes, i do have plans in that direction)"

    21st of September 1999 "Subject + Object (Reflective Systems?)
    A reflective system is impossible without the subject. Reflection means at least comparison with previously gained knowledge about something. Moreover, different relations between the subject and the object should be present in any approach related to information.
    I begin this discussion in previous messages "Highlevel + Lowlevel" and "Hyperprogramming". And it is not occasionaly. The background of Uniform Abstract Language (UA) (which I try to promote) is the very relation between the subject and the object. Generally saying, it is very universal principle. Because, it embraces all. Usually we consider, say, a star or a tree as the object of researching. But if we want to take into account either we should consider them as the object. Or if you want, we can call a machine "black box" then the object is "that gets in" and the subject is "that results in". Or, in other words we will deal with a pair "components-result".
    Further, if we want to build really universal system we should not care about integers, floats, strings, or something else. We should just assume we have some pieces of information and we can associate them in some way. Kinds of association can be represented by mere known operations like - + * / . But there is a distinction between these operations in math (in which we have almost always something in a result) and in UA (in which we can have something as a result of an association with the help of one of these operations or nothing). The next step is defining different contexts for elements of information. And again we base only on relations between the subject and the object to obtain 4 kinds of contexts (quantitative, qualitative, relational, and modal). These contexts has basic elements they based on. They are correspondingly values, names, types, and modes.
    Maybe it is too abstract for many ones. But interesting is that I got the standard model of programming language basing on these principles. And, moreover, we could not walk in the dark any more. But we did. I say it because it is the fact that names, and types are present in programming NOT originally but appeared after. Now, using the principle "the subject-the object" we can even foresee what we will need in the future."

    When we read the title of this email, we immediately thought: what? Resource Description Framework (RDF). And then we thought: what? Triple store. But this is not the topic here; the Binary-Relational Model (BRM) came up on the 30th of July 2000.
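
    Purely for illustration of why the subject-object discussion above immediately brought the Resource Description Framework (RDF) and a triple store to mind, a minimal sketch in Python of a (subject, predicate, object) store with a simple wildcard query, reusing the 'pencil' examples from the email of the 27th of April 1999 quoted above; this is our own toy, not an RDF implementation:

# Minimal toy triple store: facts are (subject, predicate, object) triples and
# queries are patterns in which None acts as a wildcard. Illustrative only.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

add("pencil", "is_a", "writing implement")
add("pen", "is_like", "pencil")
add("2", "is_not", "letter")

print(query(p="is_a"))        # all 'is_a' facts
print(query(s="pen"))         # everything stated about 'pen'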

    21st of October 1999 "To all actually interested in the Tunes project:
    This is not a question. This is the answer.
    Tunes is not for coders. The Tunes philosophy does not support the needs of C-programmers to continually re-write the same things over and over. Anyone out there who thinks that when Tunes exists, that they will "code" in it is drastically wrong, and I'll explain why:
    Tunes is about unifying languages (even domain-specific ones), and providing for automating this unification process. This still _requires_ human intervention, but the intervention required is not that of coding, for the simple reason that Tunes is about obviating that need. Consider what happens when you've fully formed your C or Lisp or Forth or Smalltalk program. It's done, and it works. But without Tunes meta-programming framework, how does that code get migrated? Changes outside the language framework are _manual_. Lifting the idea from the code is _manual_. And there's no place for a meta-framework that Tunes is in a language where you must specify everything that the language evaluator needs. To counter those of you who suggest that Tunes is about features, I disagree, because ultimately you will either encode those features in some language, or you will place it within Tunes. It is the ultimate chicken-before-the-egg issue. If you want to support Tunes, you have to give up the language idea, and that's why I am working on Arrow.
    So where is Tunes going? Arrow is Tunes. Any who disagree are those who read my words (as codes) and mistake my explanations for the idea. Many of you believe that we must have an OS to boot Tunes from in order to have Tunes. I agree on the fact that Tunes must make an OS framework (a la OSKit) in order to extend its usefulness, but requiring an OS to be on hand before we build Tunes is absurd! If the OS-code is not available as Tunes objects, then it's not useful, and therefore not Tunes.
    I want to offer the Arrow development to you. What it involves is modifying a Lisp environment so that it supports Arrow ideas. This involves some technical points which I will lay out in detail. There are a little over 100 people who receive Tunes mail, and I believe that over the past year, I've learned what nearly all of your goals are, via email or irc. From the information I have gathered, what I offer is not what you want. If you really want Tunes, you will be flexible enough to accept the tools that I have found to work with. But I do not see this happening without appealing to people outside the group.
    My point is that _I_ have done the research, _I_ have learned the ideas, and _I_ am working with the very necessary theory first-hand. My disadvantage is that I am a one-person development team, as I have been for six or seven years now. All aspects of Arrow (and therefore a good portion of Tunes, I argue) are mine, and unless you do something for Tunes other than write OS-code or your own programming languages (codes), then you are not contributing to Tunes.
    A common argument about Tunes is that it requires AI, yet your very efforts completely avoid the necessary issues. You effectively "pass the buck" to the research community at large, which is slow and cumbersome. Yet you all have intelligence sufficient to stop following your silly language ideas and learn beyond them, and Tunes *does* need that.
    I have much more to say, but I'm sure your egos are sufficiently insulted to listen to me now. You will either choose the future that Tunes must have to succeed, or you will choose to continue to waste the world's time with your pathetic lies about "what Tunes is".
    Finally, I have a request for Tril and others concerned to make a concerted effort to show *publicly* the results for Tunes that I am working on. The Arrow introduction certainly needs some work, I admit. But that work has been stalled because again _I_ am doing all the work. I need people who will take these ideas and present them to the world via the web site, and organize a *real* development effort to support Tunes.
    I'm sure that I will receive some very un-educated responses, as well as a few wise ones. Just keep in mind that I've done the work, and that you _as coders_ do not have the answers. I have learned to take on humility when it fits the situation, and this is time for you to do such. I have listened to all your ideas for far too long with too much respect, and all I have been is disappointed. Tunes needs no more "half-way" solutions. I've picked the tools for their simplicity of use in various aspects, and I've worked out a great deal of theory, particularly for the basic implementation ideas.
    Here is the line. Either cross it, and move on with Tunes into history, or detract from it as you have been wont to do. I will accept nothing else. I want Tunes, and I want it more badly than Fare or Tril or any of you do by any stretch of the imagination. Deal with it."

    This is one of the reasons why we quoted the work titled "Introducing and Modeling Polycontextural Logics", which is about the implementation of the PolyContextural Logics (PCL) and the Proemial Combinator PR.

    23rd of October 1999 "[...]
    >On Fri, 22 Oct 1999, Brian Rice wrote:
    >
    >> Finally, I suppose that I should re-iterate why my explanations aren't
    >> fantastic (but improving). I learn from mathematical theory books and what
    >> I develop in my own mind from the patterns I see. I have no daily contact
    >> with any programmers or mathematicians of any sort, and so my feedback loop
    >> must run through this group. It follows that you _will_ hear some crappy
    >> explanations from me, along with some refined thought. We as Tunes must
    >> work through these ideas, because otherwise the product we eventually
    >> present will be un-intelligible to others.
    >
    >I think you underestimate the problem here. I haven't been following this
    >thread lately, but I did read your original proposal for Arrow, and I
    >think "un-intelligible" is not an entirely incorrect description.
    >Certainly this could just be my own lack of familiarity with the subject;
    >however, I can at least speak in relation to other similarly abstruse
    >documents I've read, e.g. the ANSI C++ draft standard, the Kolmogorov
    >complexity book, and most of the "introductory" papers on category theory.
    Well, I'm truly sorry that my work isn't yet as good as those who have spent years in academia refining their work with lots of peer review. That's what my original apology was for, but i can see that it fell on deaf ears. I of course agree that all the examples you mention are much better than my work so far. Thanks for all the help.
    >But there's a deeper issue here. You're talking about a new technology
    >that supposedly will revolutionize the way programmers (and perhaps even
    >end-users) develop software. There have been previous innovations like
    >this: structured programming, object-oriented programming, functional
    >programming, concurrent programming, component software (COM/OLE), etc.
    >Indeed, each of these started with a lot of rhetoric and circumlocution,
    >but as the ideas matured they ultimately shook out into a few simple
    >concepts that *anyone* can understand. This is partly by necessity;
    >otherwise the revolution's effect would be localized to the small elite
    >group capable of understanding it.
    Of course this is necessary, and I have no doubt that such a description can be distilled from all of this rhetoric once the Arrow environment gets "filled out", just as an object-oriented language needs a certain set of classes to develop applications from in order to be really useful. To those who have spent the time these last few months on IRC helping me in discussions, you would understand that there are a few simple concepts within Arrow: (1) epistemic constructs (no semantic content), (2) reifying ontologies to support every kind of semantic content desired, built from the epistemic constructs, and (3) the modality concept to analyze the possibilities available to an agent in a given context and yield a formal means for constructing a semantic framework (a logic) dynamically.
    All of these concepts work fundamentally differently from most every other concept in programming and natural language."
    >But I don't see this happening with Arrow: You've been writing essays,
    >e-mails, and papers about it for a long time and still most of your
    >audience seems to have no clue about what Arrow is (by your own admission),
    >let alone how they might contribute to its implementation.
    Well, my peer-review loop is very large. It takes forever to get good feedback on my ideas. Even when I was in college, no one took my ideas seriously at all, except for those who weren't [Computer Science (]cs[)] or [Electric Engineering (]ee[)] majors (not much has changed, it seems). I'm sorry for you that I'm not a bona fide member of academia. That way, you would only receive well-thought out work, and wouldn't appreciate at all what went into it.
    >This is dangerous for two reasons: First, it leads to the potential
    >"emperor's new clothes" scenario associated with vaporware. But more
    >importantly, if you can't clearly communicate about Arrow to humans, how
    >are we supposed to believe that you've developed a fundamentally simpler
    >framework for communicating about algorithms to machines?
    Well, coders will refuse to believe it, not based on a lack of logic on my part, but on a lack of logic on theirs. I contend that coders' inherent desires to code will prohibit them from actually considering the ideas I propose. Arrow is not just about communicating algorithms; programming languages are for that (i.e. ontologies). The simplicity lies in the means of lifting arbitrary patterns from algorithms (or otherwise). The problem is the (dare I say it?) paradigm-shift required to be made to support such an environment.
    >This is not intended as destructive criticism. Rather, I'm offering the
    >constructive criticism that your statement "I have no daily contact with
    >any programmers or mathematicians of any sort" smacks of hubris at best.
    I EMPHASIZE: I am an enlisted technician in the U.S. Navy. It's a job I picked so that I could save money to return to college a couple of years from now. I don't know any programmers personnally that have half a clue about the subject itself, let alone anyone who can deal well with higher-order mathematics. Occasionally I get to meet with low-level engineers who treat me like s***. I did not plan this situation, and I don't like it one bit. In fact, it grates on me daily.
    >As far as I can see, "Where Tunes is going" is in circles. And that's
    >disappointing, considering the straightforwardness of its goals.
    What straightforwardness? Making a bootable reflective Lisp environment? Sure, that's straightforward, but there seems to be quite a few people who think that this will not support the Tunes goals as well as originally thought. Or perhaps I am wrong? Consider the big blank void that (officially) fills the requirements for the HLL and meta-translator projects, I can but conclude otherwise.
    And oh yes, I forgot to mention (specifically) that Arrow is my idea of the Tunes HLL. I suppose that the ontology system would supply the meta-translator idea. Arrow is already sufficiently close to a language of Cons cells that this should be relatively obvious if you've ever mulled over the HLL specs after having covered all of the concepts that modern advanced programming languages address."

    We note that on this level of the Arrow System we have philosophy of mathematics, but no Natural Language Processing (NLP).
    We also note that the mentioned ontology system is used in the context of philosophy and cybernetics, but is not related to the Semantic (World Wide) Web (SWWW).
    Very surprisingly (not really), the SWWW was also proposed only a few months before on the basis of the Binary-Relational Algebra (BRA) with a binary-relational calculus, a Binary-Relational Model (BRM), and also the Arrow Logic (AL) and a graph model.

    23rd of October 1999 "[...]
    >You want peer review, you got it :)
    >
    >> Of course this is necessary, and I have no doubt that such a description
    >> can be distilled from all of this rhetoric once the Arrow environment gets
    >> "filled out", just as an object-oriented language needs a certain set of
    >> classes to develop applications from in order to be really useful. To
    >> those who have spent the time these last few months on IRC helping me in
    >> discussions, you would understand that there are a few simple concepts
    >> within Arrow: (1) epistemic constructs (no semantic content), (2) reifying
    >> ontologies to support every kind of semantic content desired, built from
    >> the epistemic constructs, and (3) the modality concept to analyze the
    >> possibilities available to an agent in a given context and yield a formal
    >> means for constructing a semantic framework (a logic) dynamically.
    >
    >This could be a nice simple explanation if it didn't use special
    >vocabulary. Epistemic? Ontologies? My dictionary says only that
    >epistemology and ontology are obscure branches of philosophy, dealing with
    >human knowledge and existentialism... too vague. Reify and Modality
    >aren't in the dictionary. What do _you_ mean by Agent? I know what
    >"semantic" means, but I'm probably in the minority. Framework - that
    >sounds simple, but "semantic framework"? Uhh.. throw me a bone here :)
    >Unless the people you're trying to reach are at the top of the ivory
    >tower, with PhD's in CS and Philosophy, you had better put your message in
    >layman's terms.
    No web search? Are you crazy? (I suggest going to www.ask.com, and asking "What is an ontology?")
    Epistemic ideas are those that are not concerned with interpretation or representation. For instance, both bits and arrows are formal epistemic constructs for information. They can be represented and used in arbitrary ways. Other examples include electrical signals travelling through neurons. The importance of the concept is that you can model something without inherently commiting it to a particular domain of knowledge. BTW, there are examples for using arrows as information constructs, but I haven't yet found them on line. To describe it basically, arrow collections form functions, and "partial function specifications" are similar to "incomplete bit-strings". The relations are due to basic information theory, and I can elaborate as necessary.
    Ontologies are modular specifications of semantic domains, applied usually in the Knowledge-Representation field of AI. My idea for these is to extend them so that they don't simply define first-order terms, but that they also define higher-order terms, allowing them to specify higher-order programming languages. In Prolog and Lisp (and lately, XML), the usual ontologies relate terms and attributes as predicates to basic definitions with attributes in another domain's language. Basically, they contain a set of formal definitions which are applied modularly. In the usual fields, ontologies don't provide algorithms, just ways to "effectively compute" the meaning of a term in some code. My idea to extend this makes ontologies powerful enough to describe any formal (higher-order or otherwise) system, and to do it using epistemic constructs, so that the information imparted can be "lifted" as easily as possible from the original domain.
    http://wings.buffalo.edu/academic/department/philosophy/ontology/
    http://www.signiform.com/tt/htm/tt.htm
    http://www.kr.org/top/
    (These represent a very small sample of the total effort into this subject.)
    As for reifying ontologies dynamically, consider that they could be modelled as arrows (a bit of a stretch, I know). The important aspects are that they link the terms from one domain to another and that they aren't unique (i.e. there may be more than one way to define a set of terms with another). Another aspect is that, as sets of definitions between domains, the ontologies are composable and reversible (as I've described arrows as being). Essentially, I'm talking about a category of ontologies, for those familiar with categories. It's a web connecting domains of information and knowledge, constructed formally and providing a facility for automatically composing sets of definitions into new ones. Now, ultimately this web should be reflective: that is, it should address issues that affect it's internal representation and implementation, which would make the notion more complex, but at the same time it would make it usable as the Tunes eta-translator framework.
    The agent concept is just a way of talking about the information stores that communicate via ontologies. Agents can accept or decline the applicability of ontologies to their purpose. Some ontologies might provide a useful set of meanings for an agent, while others might not. Agents don't have to be limited to a single domain of knowledge or a single computational thread, they merely represent some coherent task. I don't have a good model for an agent just yet, but that could change soon.
    The modality concept is a bit rougher. If you consider a relation as providing a web of allowable transitions in a formal system (as "aRb" would imply that it is possible to move from "a" to "b" via "R"). This is the world of modal logic. Basically, the modality concept applies to sets of relations that interact. For instance, the web (or hierarchy) of possible terms that can be defined in a particular programming language defines a particular modality. You could consider it a "way of getting what you want" via small formal steps allowed by a specification.
    Well, that should prompt plenty more questions, and I'll put together some URL's to help explain further. And oh yes, thank you very much for the response."

    And suddenly, he is also an expert in the fields of ontology, Agent-Based System (ABS), and so on, which for sure was another activity of C.S. at this time.
    Somehow we got the impression at this point that they had observed and then concluded that C.S. was going to solve the puzzles and problems of the last 40 years or more, and created this Truman Show around us, which C.S. also already observed and eventually found out some few years later.

    4th of November 1999 "I will be busy for a few days, but here's a quick note to get your minds working on the arrow idea. I mentioned that there wasn't a clear framework for getting at arrow meta-information (i.e. arrows referring to a given arrow), but it turns out to be quite simple: simply invert the arrows of car-graph and/or cdr-graph, and apply this inverse to an arrow to obtain all arrows referring to that one, with elementary classification accoring to whether it's the car or cdr reference.
    Also, there is the extension of 'apply' to take entire graphs and have their arrows collectively churned by another graph acting as a function. However, there are many ways to do this, so the default semantics I will not state until I am sure of it. Note also that graphs are state-machines, and that as such the 'apply' mechanism encapsulates all updates to the machine-state from the given one.
    So, what remains is a good picture of the system that the basic arrow framework can reflect on, and use this as our graph-construction framework."

    What should we say? Graph-Based Knowledge Base (GBKB) or Knowledge Graph (KG), PROgramming with Graph REwriting Systems, Maude, and the other components of our synthesis called Ontologic System (OS).
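    The following is a minimal sketch in Python of our reading of the quoted email of the 4th of November 1999, and not the original Arrow code: arrows are modelled as (car, cdr) pairs over arrow identifiers, and the inverted car- and cdr-graphs are used to find every arrow referring to a given arrow; all names and values are illustrative assumptions only.

    from collections import defaultdict

    # A graph is a dict: arrow id -> (car id, cdr id). Ids are arbitrary hashable names.
    graph = {
        "a1": ("x", "y"),
        "a2": ("a1", "z"),   # a2 refers to a1 via its car slot
        "a3": ("w", "a1"),   # a3 refers to a1 via its cdr slot
    }

    def invert(graph):
        """Build the inverse car- and cdr-graphs: target id -> set of referring arrow ids."""
        inv_car, inv_cdr = defaultdict(set), defaultdict(set)
        for arrow_id, (car, cdr) in graph.items():
            inv_car[car].add(arrow_id)
            inv_cdr[cdr].add(arrow_id)
        return inv_car, inv_cdr

    def referring_arrows(graph, target):
        """All arrows referring to target, classified by whether the reference is the car or the cdr."""
        inv_car, inv_cdr = invert(graph)
        return {"via car": sorted(inv_car[target]), "via cdr": sorted(inv_cdr[target])}

    print(referring_arrows(graph, "a1"))
    # {'via car': ['a2'], 'via cdr': ['a3']}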

    30th of July 2000 "A Language for Binary Relational Algebra (close to first-order arrow logic)
    Apparently, someone has implemented a lazy implementation of the BRA calculus, which is similar to the arrow logic I have mentioned before. It's based on Prolog, which results in a lot of properties which make it unsuitable for my work in a direct way, but it's useful to look at, if you want to see how arrow ideas translate into programming practice.
    [...]
    I'd like to see someone else write up a review entry for this, as my own opinion is somewhat biased. Besides, I seem to have little time lately for anything other than the basic research I am doing into making Tunes-like object systems out of existing language ideas (focussing of course on Slate and Arrow for now). My job is taking up a great deal of time lately, so please bear with me."

    What should we say? Too bad that we only found the mailing list now. Howsoever, we simply say triple store and Resource Description Framework (RDF), and refer to our related publications, creations, explanations, and clarifications, specifically the ones given herein {which means the Clarification of the 8th of May 2022}.

    20th of September 2000 "Humpty Dumpty (was: reflection)
    I don't want to burst Pinker's bubble, but that was done nearly 50 years ago by cybernetics, a group of quite talented men who in a sense were advocates of the computational theory as well. In fact it is Fare's references (if not reverences) to this group in his Tunes documentation that has proded me to insure that we stay on one side of the line without drifting over.
    Now among and prominent in the group was D. Ross Ashby who authored two books, "Introduction to Cybernetics" and "Design for a Brain". The first is very readable for the casual reader and the second is mind boggling in that it has had a lasting impact on my thinking. In "Design for a Brain" Ashby describes his "homeostat" an electro-mechanical system to emulate adaptive behavior. Of all the cyberneticians (if we may call them that) his "homeostat" took an infinitesimal step toward toward their goal. However infinitesimal it was it was further than any of the others ever progressed.
    The homeostat, basically eight identical "simple EM devices" totally interconnected exhibited "adaptive behavior". What disturbed everyone was that it did it without "help", i.e. intrinsically. To insure there is no confusion here that means without the need for external intervention, i.e. human programming. You could if you so chose say that it instructed itself without the need for an (initial) instruction set.
    That does not mean that one cannot advocate a computational theory of the mind and be quite correct in doing so. It does mean that at least one instance in which it does not occur also works as well. In fact no computational model to date has ever achieve anything on its own equal to that infinitesimal step of the homeostat. Without the external intervention, excluding the creation of the devices and their interconnection, the homeostat "exhibited" sentient behavior, i.e. something from "within" itself on its own.
    Now I have to admit that my guide to the non-computational theory of the mind comes from the writing of a second-generation neurosurgeon, Antonio R. Damasio, in his book "Descartes Error: Emotion, Reason, and the Human Brain". I wont bother you with W. Gray Walter and the others who have also contributed to this subject.
    The point is that I am a guilty as anyone else in ascribing, describing, and transcribing programmable (and programmed) behavior exhibited in hardware under control of software. However, when I do so it is with the constant reminder that what is exhibited is not "intrinsic" to the software in that it was constructed to do so. In fact if it does not work as we wished, we treat it as an "error" and institute "corrective behavior modification". We do so by "physically" changing the software though we do not "physically" change the hardware. The difference is that we can clearly separate the software from the hardware, something which does not occur in the human brain.
    In point of fact we can engage in mimicry in software to any level of sophistication or in the instance of reflection to any level of reflection on reflection (recursive behavior). We can do so without fear of stepping over the line separating sentient from non-sentient behavior. As long as we choose a programmable means, externally developed software to initiate an activity (behavior) on a software/hardware system we will never produce sentient behavior (by definition).
    I go through all this because it is important for you, for me, for Pinker, and for anyone else to realise that a "computational theory" can work without an external program (software) seed. In fact for sentient behavior (non-mimicry) it's a requirement.
    When dig down into the negative reactions often expressed to Ashby's work with the homeostat when no "deus ex machina" was necessary, that something could exhibit adaptive (survivor) behavior on its own without external intervention it offered the possibility that God was unnecessary. The problem with such fears is that they do not understand the basic assumption underlying faith (the absence of evidence). The larger problem, of course, is ascribing human limits to God.
    With respect to software we are God, the deus ex machina. We can make it dance, sing, and follow the pulling of our strings. It's only when we remove ourselves, allowing the hardware (the computer) to develop its behavior intrinsically, that sentience is possible. For us that simply means a computer without an intrinsic (fixed) instruction set, but one that on "reflection" it develops dynamically on its own.
    Of course such a system is absolutely useless to us unless it "agrees" to cooperate in some manner. Understanding this you will not gain anything other than "feckless" venture capital to mass produce this for a market. Of course, this probably represents my limited marketing ability and vision.
    I have no "flame" to offer you. I just want to insure that whenever we delve into higher abstractions and esoterics of Tunes requirements and features that we in our thinking also do not cross the line. Ashby dashed the hopes of men who would play God. Only God can play that game.
    [Lynn H. Maxso]"

    The writer of this email lacks a lot of knowledge about relevant topics and contents, but nevertheless mentions some interesting points. For example, we use randomness in Evolutionary Computing (EC), so that to a very large extent a god is not required for a reflective system, which is also a creative system and embryonic system. Or should we say creating system respectively pocket god? :)
    Adaptive behaviour is a topic, which became interesting once again with the subsumption architecture and works like the ones titled "Intelligence without Reason" and "Intelligence without Representation", written by Rodney A. Brooks. One of the succeeding works of "Design for a Brain" is titled "Building Brains for Bodies" and was written by Rodney A. Brooks and Lynn Andrea Stein.
    By the way: Do not discuss a god in such a discussion, because that ends it.

    25th of September 2000 "Emergence of behavior through software

    From: Francois-Rene Rideau [mailto:fare@tunes.org]
    >On Sun, Sep 24, 2000 at 06:21:57PM -0700, Lynn H. Maxson wrote:
    >> Well, you're entitled to your view and I will respect it.
    >> But that machine will not be exhibit von Neumann architecture nor
    >> Turing rules. I will go out on a limb further to say that its
    >> software, what it does, will not be separable from its hardware,
    >> how it does it.
    >What part of "universal machine" don't you understand?
    The part where someone started believing that "universal machine" has ANY connection whatsoever to reality.
    -Billy [Tanksley] "
    Bingo!!! Obviously, our Evoos took over as the original and unique work of art.

    25th of September 2000 "Billy [Tanksley] wrote:
    "The part where someone started believing that "universal machine" has ANY connection whatsoever to reality."

    Massimo Dentico wrote:
    "This is the fun part of your message: you *seem* covertly despise the philosophy and then you propose the same theme of a philosopher like Penrose."

    Then finally Kyle Lahnadoski wrote:
    "But I suspect that [Quantum Mechanics (]QM[)] is just a statistical approach to an unknown deterministic process."

    First off I have to apologize for not being familiar with Penrose. As I said early on in this thread my reference relative to the brain is Antonio R. Demasio's "Descartes' Error: Emotion, Reason, and the Human Brain". I am not quite the blunt disbelief of Billy, don't wish to argue an unprovable belief in either determinism or non-determinism, or dispute that at the quantum level the observation (which involves quanta) interferes with (becomes part of) the process: the Heisenberg principle of uncertainty.
    [...]
    When you look at the human brain and nervous system with what little we have learned of it and then look at a computing system of hardware and software, both of which we know to the most intimate detail they are different constructs entirely. There are no fixed logic circuits in the brain (and, or, and not), no linear memory, no linear addressing, no instruction set. As what is there has been sufficient over time to allow us to construct computers and software, i.e. that their components have a realization, the question arises can the reverse also occur?
    Therein lies the crux of our differences. Can the computer do for the brain what the brain has done for it? Even with extensive assistance from us? If it is von Neumann architecture, Turing computational rules, fixed instruction set, fixed internal logic circuits, and linear addressable memory, I say no. There's no "magic" in that box.
    Fare believes otherwise, that you can go up levels of abstraction and that at some point in that upward path you achieve a capability not present in any of the lower. Something additional happens entirely free of all that has gone before. If I understand Kyle Lahnakoski correctly with his purely deterministic universe, this doesn't happen even in the brain: everything that occurs can be accounted for by everything below it. What cofuses me is that he offers this in support of Fare.
    A natural question lies in asking the conditions under which this "spontaneous generation" occurs. If it is levels of abstraction, then how many levels is it? What is the magic number? Where has it occurred. Certainly not in any of the examples he has furnished. He says in commenting on one example that we cannot fathom the result, i.e. we cannot in an interval which we can commit follow the logic which produced the results. However we can write software with deterministic logic that can produce results which we cannot replicate on our own. It still doesn't mean that anything "extra" occurred only that we used a tool as an extension of our capabilities. It does in the large what we can only do in the small. It extends our limits. Good tool.
    Fare is entitled to his opinions and the means he has chosen for his path to discovery. If at some point his opinion becomes provable fact in a scientific sense, then no such argument pursued here will continue. I wish him well. Personally I don't feel any of it is necessary to achieve the goals or meet the requirements of the Tunes project or the Tunes HLL. If we achieve them without the need for something extra, then I question even bringing it up.
    [...]
    [Lynn H. Maxson]"

    Bingo!!! And keep in mind that C.S. added Evolutionary Computing (EC), because

  • EC adds creativity to the AI and CogS, and
  • Genetic Programming (GP) even expands and grows the AI and CogS further.
    See also the OntoLix and OntoLinux Website update of the 1st of April 2015.
    Also note that the work of Damasio cited in this email is the one that is referenced in The Proposal.

    7th of November 2000 "Emergence of behavior through software
    From: Francois-Rene Rideau [mailto:fare@tunes.org]
    >> If we can't make AI, then we can't realise your vision of Tunes.
    >Wrong. Tunes' goal is not, has never been, and will never be
    >to make an AI.
    That's very clear. Thank you. Further discussion on AIs is not a topic for any TUNES discussion list.
    [...]
    -Billy"

    But our Evoos is AI and also cites the TUNES project, specifically Rideau and reflection.

    14th of November 2000 "everyobdy who is talking on this list seems to be discussing "emergant behavior" or some thing like that. Let me try to ground the conversation in the only semi-well developed example of emergant behavior. I will do this to present a definition of emergant behavior that I hope people will use as a discriminator so that "emergant behavior" can be rescued from the trashbin of buzz.... =P
    The example I will use is the Neural Network. This is an emergant system because what a neural network DOES is in no obvious way retated to how its put togeather! ooooh. :0 I mean how is a neural network implemented? By adding a bunch of registers against the matrix to yield a result and then adjusting the matrix to yield better behavior on the next iteration. (more or less). The result of this operation is a function such as speech recognition or hand-reading (OCR). But again the system that emerges is, in a way, perpendicular to the system that is implemeted in the computer. =P
    It is this property of perpendicularity that defines the "emergant system" I mean if you implement a bunch of objects or any other abstraction of your choice, and these objects work towards your goal, you have already failed! It's this non-sequeter aspect that is key. Take consciousness itself. People have been scratching their heads for years aobut how it can be reduced to meat. In a way it can't be. The reductionistic aproach is almost completely barred from the playing field. At the end of the day the neurons in the brain implement a rather simple and regular algorythm hundreds of thousands of times over. It it is from the re-iteration of this algorythm that pattern emerges and life begins. =\
    [...]
    [Alan Grimes]"

    Bingo!!!

    Comment
    The same suspicious contents appear in the emails of the mailing list of the TUNES project:
    That we also find, before the publication of our Evoos,

  • algebra,
  • embryo,
  • Information Theory (IT), including Algorithmic Information Theory (AIT),
  • Model Theory (MT), including Robinson diagram or positive diagram,
  • epistemology,
  • existence,
  • strange loop,
  • self-similarity, but without the conclusion that self-similarity respectively the fractal structure is grounding,
  • kernel-less respectively Hardware Abstraction Layer (HAL), nucleus, nanokernel, and microkernel,
  • ...

    and after the publication of Evoos

  • Cognitive System (CogS),
  • ...

    in the emails of the mailing list is even more highly suspicious now and finally finishes it off.

    For sure, we have the fields of

  • logics
    • First-Order Logic (FOL),
    • Modal Logic (ML),
    • Arrow Logic (AL),
    • etc.,
  • Information Theory,
    • Algorithmic Information Theory (AIT),
  • Model Theory and Robinson,
    • knowledge-level reflection or model-level reflection,
  • Category Theory,
  • embryology,
  • philosophy
    • ontology
  • psychology,
    • epistemology,
  • cybernetics,
  • and so on.

    But ... a 21-year-old self-taught programmer and amateur mathematician, and engineer on duty on the relatively loud and very big aircraft carrier USS Carl Vinson on a mission traveling over the Pacific to the Gulf of ..., and having no time to work on any other matter at all at that time, such as the TUNES project, is able to write such a deep and broad document about highly complex fields and even more complex topics of them, like the draft Arrow System version 8 (with 25 or 45 pages depending on the text format), in only 3 months. Never ever.
    It is just not possible for a relatively normal person to start from zero knowledge, and to find, read, and learn to use so much content of so many highly specialized fields and related documents in just 5 years, which is required to make such statements in emails and write such a document. Only autistic savants are able to amass so much content in such a short time, if they have the time, but they are unable to create something new or discuss on a mailing list.
    C.S. could only do it due to going to school respectively college until the age of around 19 and a half years and then to university for around 5 years for studying very profound hardcore Informatics, including Lisp→Scheme, Logic Systems of Informatics, operating systems, database management systems, computer networks and distributed systems, software technology and software architecture, and so on, and also due to a lot more in the course of research and development in our corporation.
    And the author of the Arrow System was coming up with the exact complement and was doing exactly the same at exactly the same time as C.S.. Uh, ... yes, indeed, we should say reflecting C.S. to make sense.

    Furthermore, we have a very high degree of conformity between the Arrow System and our Evoos. The probability that such an equivalence is not merely the result of a happenstance is so high that a scientific theory with this degree of certainty would be regarded as a law in this observable universe. This means or implies that the Arrow System and the related part of our Evoos are the same. But this is not possible in case of such a highly complex and difficult work.
    There must have been a transfer of information in only one direction. We do know that we were spied out at that time already due to the often mentioned

  • IBM Unified Modeling Language (UML) issue,
  • Lamborghini Diablo issue, and
  • event that the book Genetic Programming was added to the library of the university at exactly the time when C.S. began the work on Evoos in 1998.

    In addition, we had made the same observation in relation to other fields, such as Multi-Agent System (MAS), Cognitive System (CogS), and Cognitive Agent System (CAS).
    But the author could argue that C.S. simply plagiarized his Arrow System.

    But the one or more authors of the Arrow System owe the public a truly plausible explanation of why they want this and that, specifically the development process of a Virtual Machine (VM) with a bootstrapping process, a "critical virtual mass", and a metamorphosis process, instead of just implementing the operating system of the TUNES project or a VM as discussed on the mailing list, for example by taking the already existing works, including the implementation of the Proemial Combinator PR, in contrast to C.S..

    Go away with that.
    We have no other explanation than that an illegal flow of information respectively espionage has happened.

    Also, the guy from France Telecom came up with AIT and microkernel at exactly the same time, as we already mentioned.

    10:40 UTC+2
    Summary of website revision

    *** might become a clarification ***

    An online encyclopedia about the subject connectionism: "Connectionism is an approach in the field of cognitive science that hopes to explain mental phenomena using artificial neural networks (ANN).[1] Connectionism presents a cognitive theory based on simultaneously occurring, distributed signal activity via connections that can be represented numerically, where learning occurs by modifying connection strengths based on experience.[2]
    Some advantages of the connectionist approach include its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity for graceful degradation.[3] Some disadvantages include the difficulty in deciphering how ANNs process information, or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level.[2]
    The success of deep learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks has brought with them increased interpretability problems.[1] Connectionism is seen by many to offer an alternative to classical theories of mind based on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.[1]

    Basic principles
    The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses, as in the human brain.

    Spreading activation
    [...]

    Neural networks
    Neural networks are by far the most commonly used connectionist model today. Though there are a large variety of neural network models, they almost always follow two basic principles regarding the mind:
    1. Any mental state can be described as an (N)-dimensional vector of numeric activation values over neural units in a network.
    2. Memory is created by modifying the strength of the connections between neural units. The connection strengths, or "weights", are generally represented as an N×M matrix.
    Most of the variety among neural network models comes from:

  • Interpretation of units: Units can be interpreted as neurons or groups of neurons.
  • Definition of activation: Activation can be defined in a variety of ways. [...]
  • Learning algorithm: Different networks modify their connections differently. In general, any mathematically defined change in connection weights over time is referred to as the "learning algorithm".

    Biological realism
    [...]

    Learning
    The weights in a neural network are adjusted according to some learning rule or algorithm [...]. Thus, connectionists have created many sophisticated learning procedures for neural networks. Learning always involves modifying the connection weights. In general, these involve mathematical formulas to determine the change in weights when given sets of data consisting of activation vectors for some subset of the neural units. Several studies have been focused on designing teaching-learning methods based on connectionism.[14]
    [...]

    Parallel distributed processing
    The prevailing connectionist approach today was originally known as parallel distributed processing (PDP)[, or distributed connectionist, or distributed neural network belonging to the subsymbolic approach]. It was an artificial neural network approach that stressed the parallel nature of neural processing, and the distributed nature of neural representations. It provided a general mathematical framework for researchers to operate in. The framework involved eight major aspects:

  • A set of processing units, represented by a set of integers.
  • An activation for each unit, represented by a vector of time-dependent functions.
  • An output function for each unit, represented by a vector of functions on the activations.
  • A pattern of connectivity among units, represented by a matrix of real numbers indicating connection strength.
  • A propagation rule spreading the activations via the connections, represented by a function on the output of the units.
  • An activation rule for combining inputs to a unit to determine its new activation, represented by a function on the current activation and propagation.
  • A learning rule for modifying connections based on experience, represented by a change in the weights based on any number of variables.
  • An environment that provides the system with experience, represented by sets of activation vectors for some subset of the units.

    [...]

    Earlier work
    PDP's direct roots were the perceptron theories of researchers such as Frank Rosenblatt from the 1950s and 1960s. But perceptron models were made very unpopular by the book Perceptrons by Marvin Minsky and Seymour Papert, published in 1969. It demonstrated the limits on the sorts of functions that single-layered (no hidden layer) perceptrons can calculate, showing that even simple functions like the exclusive disjunction (XOR) could not be handled properly. The PDP books overcame this limitation by showing that multi-level, non-linear neural networks were far more robust and could be used for a vast array of functions.[15]
    Many earlier researchers advocated connectionist style models, for example in the 1940s and 1950s, Warren McCulloch and Walter Pitts (MP neuron), Donald Olding Hebb, and Karl Lashley. McCulloch and Pitts showed how neural systems could implement first-order logic: Their classic paper "A Logical Calculus of Ideas Immanent in Nervous Activity" (1943) is important in this development here. [...]

    Connectionism apart from PDP
    [...]
    There are also hybrid connectionist models, mostly mixing symbolic representations with neural network models.

    Connectionism vs. computationalism debate
    [...]
    In 2014, [...] DeepMind published a series of papers In 1999, C.S. discussed and published the Evolutionary operating system (Evoos) in The Proposal describing a novel [learning method called Self-Supervised Learning (SSL) and a] Deep Neural Network structure [now] called the Neural Turing Machine[...] able to read symbols on a tape and store symbols in memory. Relational Networks, another Deep Network module published by DeepMind C.S. with the Ontologic System (OS), are able to create object-like representations and manipulate them to answer complex questions. Relational Networks and Neural Turing Machines are further evidence that connectionism and computationalism need not be at odds."

    Comment
    First of all, we note that "[n]eural networks is the parent discipline of which connectionism is a recent incarnation" according to Rodney A. Brooks: Intelligence without Representation.
    We also note the document titled "Integrated Connectionist Models: Building AI Systems on Subsymbolic Foundations", which describes the Distributed Artificial Neural Network (DANN) model DIstributed SCript processing and Episodic memorRy Network (DISCERN), and the related discussion in the Clarification of the 28th of April 2016.

    Exactly, just some more intended and accomplished outstanding achievements of C.S..

    Furthermore, we looked at the ML models based on the attention mechanism

  • transformer,
  • reformer, and
  • perceiver, and also
  • sparse transformer,
  • routing transformer,
  • sequencer,
  • linformer,
  • compressive transformer, as well as
  • Attention Free Transformer (AFT).

    The Q, K, and V of the attention mechanism are usually realized as an MLP (linear projections), and self-attention is only another name for the matrix operation of an MLP.
    In fact,

  • cross-attention of the perceiver uses an MLP and
  • AFT is a plug-in replacement for the Multi-Head Attention (MHA) operation in the basic transformer model and can be viewed as the use of an MLP in place of the attention operation.

    Eventually, we got the confirmation of our conclusion from around the years 1998 to 2001 that

  • it all ends in fully connected ANNs, such as the MLP, and
  • for every ANN, including RNN, CNN, and whatsoever, its
    • structure can be described as a matrix,
    • function can be represented by matrix arithmetic, and
    • operation can be executed as matrix addition and multiplication (mathematics, (linear) algebra, group),

    and our final decision (see the sketch below). {Have we already made a related publication in the past?}
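    The following is a minimal sketch in Python with NumPy illustrating this conclusion, namely that both a fully connected (MLP) layer and scaled dot-product self-attention reduce to matrix multiplications and additions plus element-wise functions; all shapes, names, and values are illustrative assumptions only and are not taken from any particular ML model listed above.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 4, 8                      # sequence length, model width
    X = rng.standard_normal((n, d))  # token representations as a matrix

    # Fully connected (MLP) layer: one matrix multiplication and one addition.
    W, b = rng.standard_normal((d, d)), rng.standard_normal(d)
    H = np.tanh(X @ W + b)

    # Self-attention: Q, K, V are linear projections (single-layer "MLPs"),
    # and the attention itself is softmax(Q K^T / sqrt(d)) V, i.e. matrix algebra.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    attended = weights @ V

    print(H.shape, attended.shape)   # (4, 8) (4, 8): everything stays matrix-shaped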

    As we said in the past, we have calculated all variants of a foundational n-layered ANN (see also the Clarification of the 8th of July 2016).
    This also explains why we have, as one of the basic properties, the fields of

  • Computer-Aided technologies (CAx), specifically in this context CAEngineering, which are based on the Finite Element Method (FEM) and Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), MultiBody Dynamics (MBD), etc., which again are based on solving differential equations and algebraic equations with linear algebra respectively doing matrix operations (see the sketch after this list), and
  • Problem Solving Environment (PSE), which is built around or even integrates a system of CAx.
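    The following is a minimal sketch in Python with NumPy of this point, assuming the simplest 1D Poisson problem -u'' = f with homogeneous boundary conditions: the FEM route ends in assembling a (stiffness) matrix and solving a linear system, which is nothing else than matrix operations; the mesh size and load are illustrative assumptions only.

    import numpy as np

    n = 6                      # number of interior mesh nodes on the unit interval
    h = 1.0 / (n + 1)          # mesh width
    f = np.ones(n)             # constant load f = 1

    # Assemble the standard tridiagonal stiffness matrix (linear elements).
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h

    # Load vector and linear solve: the whole computation is linear algebra.
    b = f * h
    u = np.linalg.solve(A, b)
    print(u)                   # discrete approximation of u(x) = x(1 - x)/2 at the nodes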

    Together with

  • cybernetics, specifically Günther's kenogrammatics, proemiality, and polycontexturality, and
  • Model Theory (MT) and the other fields, specifically Robinson diagram, and
  • what we said before in relation to (deterministic) chaos and order, fractal, and so on,

    we finally concluded at that time what we described as fractal (fractal) → fractal, which differs from graph (graph) → graph, including arrow (arrow) → arrow.

    This means that eventually we already have all possible variants of ANN and related matrices and matrix operations no matter how fancy and elaborated they are.
    This means also that all those ML models are merely implementations of individual variants. Optimizations, like for example AFT, are merely improvements of related matrix operations.

    We also looked once again at

  • MLP,
  • modular ANN, and
  • hybrid and modular ANN,

    as well as the connectionist ML model respectively ANN learning approaches or methods

  • Unsupervised Learning (UL or USL), and
  • Supervised Learning (SL or SupL), and also
  • Hybrid Learning (HL), also called Self-Supervised Learning (SSL), involving UL respectively pretraining followed by SL or UL respectively fine-tuning (see the sketch after this list), as well as
  • Transfer Learning (TL).
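    The following is a minimal sketch in Python with NumPy of the pretraining and fine-tuning flow listed above: an unsupervised, autoencoder-style stage learns a representation without labels, and a supervised stage then fine-tunes the same encoder together with a small task head; the architecture, data, and hyperparameters are illustrative assumptions only and do not reproduce any specific HL, SSL, or TL method.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((64, 10))            # unlabeled data
    y = (X[:, 0] > 0).astype(float)              # labels, used only in stage 2

    W_enc = rng.standard_normal((10, 4)) * 0.1   # shared representation layer
    W_dec = rng.standard_normal((4, 10)) * 0.1   # decoder, used only for pretraining
    W_out = rng.standard_normal((4, 1)) * 0.1    # task head, used only for fine-tuning
    lr = 0.01

    # Stage 1: unsupervised pretraining (reconstruct the input, no labels involved).
    for _ in range(200):
        H = np.tanh(X @ W_enc)
        X_hat = H @ W_dec
        err = X_hat - X
        W_dec -= lr * H.T @ err / len(X)
        W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)

    # Stage 2: supervised fine-tuning of the same encoder plus a small task head.
    for _ in range(200):
        H = np.tanh(X @ W_enc)
        p = 1 / (1 + np.exp(-(H @ W_out)))       # sigmoid output
        err = p - y[:, None]
        W_out -= lr * H.T @ err / len(X)
        W_enc -= lr * X.T @ ((err @ W_out.T) * (1 - H**2)) / len(X)

    print("train accuracy:", ((p[:, 0] > 0.5) == (y > 0.5)).mean())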

    We also quote a webpage about Self-Supervised Learning (SSL) published on the 27th of May 2020: "What is Self-Supervised Learning? [] Will machines ever be able to learn like humans?
    [...] Using self-supervised learning machines can predict through natural evolution and consequences of its actions, similar to how newborns can learn incredible amounts of information in their first weeks/months of life by observing and being and curious. [...]" Bingo!!!

    Comment
    We never claimed more in relation to our Evoos.
    Do not confuse Hybrid Learning (HL), based on Modular ANN, with SSL.
    Guess why it was called Self-... (see also once again the Clarification of the 28th of April 2016).
    Obviously, wav2vec and transformer are based on SSL and hence on Evoos.

    Either

  • all that illegal Free and Open Source Software (FOSS) will be removed As Soon As Possible Or Better Said Immediately (ASAP OBSI), which is based on
    • one or more of the basic technologies created with our Evoos with its EvoA or our OS with its OSA, such as for example
      • Self-Supervised Learning (SSL),
      • multimodal ML, CI, SC, ANN, EC, etc.,
      • polycontextural ML, CI, SC, ANN, EC, etc., and
      • any other creation,

      or

    • one or more of the basic technologies integrated by our Evoos with its EvoA or our OS with its OSA, such as for example
      • modular ANN,
      • Unsupervised Learning (UL),
      • Supervised Learning (SL),
      • Hybrid Learning (HL),
      • multimodality,
      • polycontexturality (subjectivity and common sense), and
      • any other integration,

    (e.g. Transformer, Web2vec, BERT, GPT, AFT, PaLM, and so on), or

  • we will not modify and license this part of our Ontologic System, because we will
    • not clean up that mess in the fields of HardBionics (HB) and SoftBionics (SB) (e.g. AI, ML, CV, CI, ANN, MAS, CAS, EC, SI, etc.) deliberately created by companies, like for example Alphabet (Google), Microsoft, Tesla, OpenAI, Meta (Facebook), and Co., and
    • not tolerate that
      • entities are creating an alternative reality by simulating an ordinary technological progress as some kind of a deep fake,
      • individual basic parts and creations of ours are scrapped and split off, but a causal link with the originals has not been avoided, and
      • integrities of C.S. and our corporation are attacked.

    We already said that the SoftBionics as a Service (SBaaS) capability models and operational models (SBaaSx) will be exclusive, though we have only said it.


    26.June.2022

    05:38 UTC+2
    Short summary of clarification

    We have continued the work related to the Clarification of the 8th of May 2022.

    Honestly, the author of the Arrow System might have been able to find, collect, and aggregate all the related matter. In fact, he shows a behaviour that is typical of an autist. But this is only a theoretical possibility and not a reality, and therefore it is no wonder that, due to his autodidactics or self-teaching, the attempt at realisation was a disaster, if it is possible at all in its ideal version, and consequently the conceptualization had profound issues and the implementation failed.
    Furthermore, most of our criticisms, points of view, and explanations in relation to such metaphysical and cybernetical ideas, concepts, or systems are correct respectively remain intact.
    In relation to the Arrow System at least the following fields, concepts, and works exist as foundation:

  • cybernetics and Günther with kenogrammatics, proemiality, and polycontexturality, including
    • PolyContextural Logic (PCL) or Subjective Logic,
  • Model Theory (MT) and Robinson with the Robinson diagram,
  • Relational Algebra and Tarski with the Calculus of Relations or Relational Calculus,
    • Binary-Relational Algebra (BRA) and Binary-Relational Model (BRM),
  • Classical Logics,
  • Algebraic Logics,
  • Tarski with Cylindric Algebras, generalized to the case of Many-sorted Logic in 2006,
  • Dynamic Logics, including
    • Dynamic Predicate Logic,
  • Modal Logics, including
    • Many-dimensional Modal Logic, including
      • Arrow Logic, including
        • Many-dimensional Arrow Logic or Arrow Logic II, and
        • Dynamic Arrow Logic,
    • Cylindric Modal Logic,
    • Dynamic Modal Logic,
    • Modal Transition Logics, including
      • Propositional Dynamic Logic (PDL),
  • formal context, or formalized contextual dependence, or context transcendence formalization, or context as formal object, or Common Sense Computing (CSC), though still incomplete and tentative in the year 2012 due to the usual metaphysical issue, here expressed as the impossibility of defining an absolute outermost context, because all sentences are context dependent; others discuss it as the symbol grounding problem,
  • and other fields.

    Eventually, one has to ask if even the reflective Arrow Logic and dynamic ontology included in the Arrow System are original and unique expressions of ideas.

    Howsoever, our masterpieces are much more ingenious, sophisticated, and elaborated, and therefore stealing them always fails.
    Indeed, all of this and much more was already the source of inspiration and foundation of our Evoos, including the subprojects of the TUNES project, such as the Arrow System and the Distributed operating system (Dos) TUNES OS, and also the Dos Aperios (Apertos (Muse)) of the company Sony.

    In the end, only quite a lot of chaos has been created, specifically in relation to cybernetics, ontology, Symbolic Artificial Intelligence or Artificial Intelligence I (AI 1), Distributed Computing (DC) and Distributed System (DS), Agent-Based System (ABS), foundation of the Semantic (World Wide) Web (SWWW), and other foundations and fields, which has to be reviewed and analyzed in detail.
    For example,

  • "Reflection, Non-Determinism and the λ-Calculus" is one of the guiding works of the TUNES project, but deterministic chaos is not,
  • self-similarity is mentioned, but the implication of self-similarity respectively the fractal structure as grounding is not drawn, which is related to the former list point,
  • homoiconicity is discussed and some kind of agent system seems to be suggested, but
    • on the one hand a Holonic Agent System (HAS) is not discussed and
    • on the other hand only State Machines (SMs) and Transition Systems (TSs) are discussed instead of agents, but also rewriting systems and directed graphs, which coincide mathematically with TSs, but not with SMs,
  • "describes contexts as identifying agents (i mean all the aspects of agents that you would want to apply) with structures of ongologies [ontologies] called "ontology frames"[, which] are basically collections of nodes in an ontology graph, overlaid by a structure that i haven't looked into yet. the idea is that a context has a boundary, and that interpeting information from the outside [respectively outer context] of it requires some translation process from an exterior ontology [respectively outer context] to one of its own ontologies (represented by an arrow)",
  • "agent concept is just a way of talking about the information stores that communicate via ontologies",
  • "ontology [...] is an information filter",
  • "user context vocabulary (called an ontology)",
  • "[...] the draft [...] describes contexts as identifying agents (i mean all the aspects of agents [or information stores that communicate via ontologies [or [information filters] or [user context vocabulary]] that you would want to apply) with structures of on[t]ologies called "ontology frames". the frames are basically collections of nodes in an ontology graph, overlaid by a structure that i haven't looked into yet. the idea is that a context has a boundary, and that interpeting information from the outside of it requires some translation process from an exterior ontology to one of its own ontologies (represented by an arrow)."
  • "in the paper, i used the example of [High-Level Languages (]HLLs[)] as ontologies in reflective programming systems to explain how the arow system (or any cybernetic useful information system) should architecturally differ from the ordinary type of system."
  • ontology and XML are mentioned, but something like a SWWW is not discussed, and
  • ontology system, dynamic ontology, and computing ontology are discussed, but eventually only in the sense of an Information System (IS), and Graph-Based Knowledge-Based System (GBKBS) or Knowledge Graph-Based System (KGBS).

    But luckily, a lot of order has been preserved as well, specifically in relation to bionics, Cognitive Computing (CogC), and other foundations and fields, because they were discussed only after the publication of our Evoos.

    The latter also supports our claim that entities knew about our activities and works at that time. Since around the summer of the year 1998, C.S. has had a massive scientific and economic entourage.

    We conclude with the recall that in C.S.' follow-up part of the novel titled "The Old Man and the Sea", written by Ernest Hemingway, the sharks need the remaining (endo)skeleton of the marlin fish, which therefore has not lost its value even without meat on the bone, and with the reference to the Comment of the Day #1 of the 8th of January 2020.


    29.June.2022

    11:23 and 17:36 UTC+2
    Grand revision

    All these years, we were not in the mood to look at the fraud, which happened around the years 1998 to 2002. We only knew that there was the usual fraud, like the one we have investigated and documented in relation to the fraud done later.
    We also have not looked at the mailing list of the TUNES project before, but we do not know if we did not find it in the past or just ignored it around the year 1999 under the wrong assumption that everything was said in the published webpages and documents of this project.

    After we have now worked through all the fields and works related to

  • Virtual Machine (VM), operating system-level virtualization or containerization,
  • Software-Defined Networking (SDN),
  • Service-Oriented technologies (SOx),
  • Space-Based technologies (SBx),
  • Agent-Based System (ABS) and Multi-Agent System (MAS),
  • Cognitive Computing (CogC), Cognition and Affect (CogAff) architecture (CogAffA) for Cognitive and Affective Computing (CogAffC) and Cognitive and Affective System (CogAffS), Cognitive-Affective Processing System (CAPS), Cognitive Agent System (CAS),
  • resilience,
  • TUNES project

    we got a more complete and better view of them, and also of what is wrong and what is right in our ... publications.

    We already said that our

  • activities at that time were inspiring and moving the interested mass much more than we had ever expected or even could imagine before,
  • Evoos is the only work which integrates all these fields and works somehow at the same time, and
  • OS is the only work which still exists, is even able to realize all the visions, ideas, concepts, designs, technologies, goods, and services, and even puts something totally new on top of the heap, making it a whole range and not just a mountain.

    At the moment, we are translating and evaluating the statements made in the emails of the TUNES mailing list, and also comparing and classifying these contents with the other fields and works.
    Specifically interesting is our resurrection of the field of ontology from dead projects of the DARPA and the U.S. Army, and the answer to the question of whether the Semantic (World Wide) Web (SWWW) is also based on the TUNES project.
    Indeed, the SWWW is based on the so-called semantic triple, or Resource Description Framework (RDF) triple, or simply triple, which

  • is the atomic data entity in the RDF data model and
  • is based on a Binary-Relational Model (BRM), and hence on the Binary-Relational Algebra (BRA), and also (related to) the Arrow Logic, and a graph model.

    Bingo.
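    The following is a minimal sketch in plain Python, without any RDF library, of this point: an RDF-style triple is a (subject, predicate, object) tuple, that is a labeled binary relation, and a triple store is just a set of such tuples forming a directed, labeled graph; the node labels are illustrative assumptions only.

    # A triple store as a set of (subject, predicate, object) tuples.
    triples = {
        ("Evoos", "describedIn", "TheProposal"),
        ("TheProposal", "publishedIn", "1999"),
        ("Evoos", "relatedTo", "OntologicSystem"),
    }

    def query(store, s=None, p=None, o=None):
        """Simple pattern matching: None acts as a wildcard for that position."""
        return [(ts, tp, to) for (ts, tp, to) in store
                if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)]

    # Each predicate is a binary relation: the pairs related by "describedIn".
    described_in = {(ts, to) for (ts, tp, to) in triples if tp == "describedIn"}

    print(query(triples, s="Evoos"))   # all outgoing edges of the node "Evoos"
    print(described_in)                # {('Evoos', 'TheProposal')}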

    Only later, the entities related to the SWWW found out that there is a certain problem with common sense and polycontexturality, specifically when the Linked Data (LD) concept and the Dynamic Semantic Web (DSW) were added around the year 2005, as we worked out in the Clarification of the 8th of May 2022. But the TUNES project with the Arrow System and our Evoos had already reached this point, and also all the rest.

    Our Evoos added the fields of

  • operating system-level virtualization or containerization,
  • Artificial Life (AL),
  • rest of HardBionics (HB) and SoftBionics (SB) (e.g. ML, Computational Intelligence (CI) and Soft Computing (SC), ANN, CV, EC, SI or SC),
  • Holonic Agent System (HAS),
  • Cognitive Agent System (CAS) (see for example Cooperative Man Machine Architectures - Multiagent Planning and Scheduling (CoMMA-MAPS) and Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents (CoMMA-COGs) based on hybrid and layered InteRRaP (social planning, local planning, and behavior-based control)),
  • physics,
  • Model-Based Autonomous System (MBAS) or Immobile Robotic System (ImRS or Immobot),
  • rest of realities,
  • CogS,
  • layered CogS,
  • Cognition and Affect (CogAff) architecture (e.g. Emotive Computing (EmoC) and Affective Computing (AffC)), and
  • panalogy (e.g. Emotion Machine).

    Through

  • operating system (e.g. UNIX), specifically Inter-Process Communication (IPC) of os and also IPC of MAS, and
  • CoMMA-COGs

    our Evoos also adds

  • Resource-Oriented Computing (ROC).

    Through

  • ROC, or
  • operating system-level virtualization or containerization, and
  • IPC

    our Evoos also adds

  • microService-Oriented Architecture (mSOA).

    Through

  • multimodality,
  • Virtual Environment (VE), including Social Interaction Framework for Virtual Worlds (SIF-VW) and Cooperative Man Machine Architectures - Cognitive Architecture for Social Agents (CoMMA-COGs), and
  • inference and proof on the basis of Agent Chameleons and NEXUS (there is no other prior art that is based on agent mobility, evolution, and morphogenesis, and also assigns real objects to virtual objects in the sense of a fusion of realities)

    our Evoos also adds

  • AR and AV to the one set of prior art and
  • VR to the other set of prior art,

    and thus the whole Mixed Reality (MR), and eXtended Mixed Reality (XMR) or eXtended Reality (XR) spectra.

    Through

  • CogAff, and
  • MBAS or Immobot

    our Evoos also adds

  • Ubiquitous Computing (UbiC) and Internet of Things (IoT), and also Cyber-Physical System (CPS), Intelligent Environment (IE), and also Holonic Manufacturing System (HMS), and so on

    to all prior art, which lacks one or more of these fields.

    At this point one can already see that it does not matter at all anymore what has happened in the period from 1998 to 1999, because these creations, extensions, and further developments are already sufficient to demand proper citation with attribution.
    And then came our OS with a lot more.

    Either way, the Peer-to-Peer (P2P) Virtual Machine (VM) (P2P VM) Askemos always came too late, as is the case with, for example, Bitcoin and Ethereum.
