News 2020 January

01.January.2020

New Year 2020

The OntomaX team wishes our friends, supporters, and fans a happy new year.

Ontonics Further steps

Based on the final assessment in relation to the fields of Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES), respectively the related parts of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) with our Ontologic Net of Things (ONoT), Ontologic Web of Things (OWoT), and Ontologic uniVerse of Things (OVoT), we had to make a revision of our

  • evaluation of the enterprise value of our corporation, which we set to 160 trillion or 160,000 billion USD, and
  • offers for the business units and companies given in the Further steps of the

    (see also the Further steps of the 2nd of December 2019).
    We do apologize for any confusion.

    We offer the corrected true enterprise values, which we set to 50% of the reported wrong enterprise values, and even add the related shares, based on the ratio between the estimated enterprise value of 160 trillion or 160,000 billion USD of our corporation and the corrected enterprise values of these companies (a calculation sketch follows the list):

  • IBM 87.5 bn USD plus 0.0546875%, though our latest offer is based on 30% of the reported wrong enterprise values (52.5 bn USD)
  • Alphabet (Google) 375 bn USD plus 0.234375%, though our latest offer is $1 inclusive of all of our waivers and the rejection of any claims of shareholders
  • Microsoft 500 bn USD plus 0.3125%, though our latest offer is based on 30% of the reported wrong enterprise values (333 bn USD) (see below)
  • SAP 87.5 bn USD plus 0.0546875%
  • Atos 8.25 bn USD plus 0.00515625%
  • Amazon Web Services 60 bn USD plus 0.0375%
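
    For illustration only, a minimal sketch in Python of how the offered share percentages follow from the listed figures; the figures are taken from the list above, while all function and variable names are our own assumptions for this example:

      # Hypothetical sketch: offered share = corrected enterprise value /
      # estimated enterprise value of our corporation (160,000 bn USD).
      CORPORATION_VALUE_BN = 160_000  # 160 trillion or 160,000 billion USD

      corrected_values_bn = {  # 50% of the reported wrong enterprise values
          "IBM": 87.5,
          "Alphabet (Google)": 375,
          "Microsoft": 500,
          "SAP": 87.5,
          "Atos": 8.25,
          "Amazon Web Services": 60,
      }

      for company, value_bn in corrected_values_bn.items():
          share_percent = value_bn / CORPORATION_VALUE_BN * 100
          # e.g. IBM: 87.5 / 160,000 * 100 = 0.0546875
          print(f"{company}: {value_bn} bn USD plus {share_percent}%")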

    We consider these offers as made under Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions, and as our last offers, which will not be improved.

    Furthermore, we concluded that the estimation of the enterprise value of the company Microsoft is much too high and must be corrected to 30% (333 bn USD) or even less, because our Society for Ontological Performance and Reproduction (SOPR) has blacklisted it, which means it

  • has no legal certainty in relation to our ArtWorks (AWs) and further Intellectual Properties (IPs)

    (see also the Further steps of the 4th of November 2019).

    04:19 and 16:19 UTC+1
    SOPR #262

    *** Work in progress - complete wording missing ***
    Topics

  • Legal matter
  • Exclusion of Microsoft
  • Exclusion of Cisco Systems
  • Exclusion of Renault-Nissan-Mitsubishi Alliance
  • Further steps

    Legal matter
    We are absolutely determined, as is the case with the prosecutors and market regulators in North America, Europe, and other places. Now we are marching to get our properties back and damages compensated, as announced in the past multiple times. Please take it very, very, very seriously from now on. Fraudsters are playing with their freedom.

    Exclusion of Microsoft
    We blacklisted the company Microsoft in relation to a second issue at first, but then concluded that the reason was within the limits of the provisions of the Articles of Association (AoA) and the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR) and withdrew the blacklisting in the issue ... of the ... again.
    But somehow we had not considered a first issue, which happened in March 2019, is related to the vehicle manufacturers Renault, Nissan, and Mitsubishi, and is on the same level as that second issue.
    Indeed, we have given allowance to build up systems, platforms, applications, devices, services, etc., but as we said

  • on the one hand in the issue ... of the ... they would have to be handed over to our SOPR if demanded by the provisions included in the AoA and the ToS, and
  • on the other hand in the issue ... of the ... all these provisions were made under wrong assumptions by us and therefore void from the legal point of view.

    We do not want to make these provisions void but keep our word. But Microsoft also made clear in several other cases unrelated to our SOPR that it requires a judgement by a court in such a case, or in other words, we have to make the assumption that it has already signaled that it will not sign our agreement at its first submission. Eventually, this shows how the management of Microsoft is thinking and acting, where it has finally crossed the white, yellow, or red line, and that our provisions seem not to be relevant at all.
    Taken all together, this makes the blacklisting conclusive, and therefore we had to blacklist the company Microsoft, because it is actively and effectively disturbing the goals and even threatening the integrity of our SOPR by its overall strategy.

    Exclusion of Cisco Systems
    The latest developments may suggest that we reached a point, where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Cisco Systems to question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of Cisco Systems, including joint ventures with for example the company Rakuten.

    Exclusion of Renault-Nissan-Mitsubishi Alliance
    The latest developments may suggest that we reached a point, where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Renault, Nissan, and Mitsubishi, as well as Fiat and Chrysler, to question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the companies on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of the Renault-Nissan-Mitsubishi Alliance, including Renault, Nissan, Mitsubishi, but also Fiat and Chrysler.

    Further steps
    We do not talk about phase 4 at this time.

    We have to give the friendly recommendation to become serious and real about the legal situation, finally. We will take back all of our properties in the next month. Promised, too.

    King Smiley Further steps

    We learned that the reconstruction of the Palais des Tuileries would only cost around 350 million Euro, which is a little more than one-sixth of the allocated budget of 2 billion Euro.
    In addition, we added a further 6 billion Euro to the overall intended budget and reserved in sum 8 billion Euro for the extension of the wings of the Louvre (see the Further steps of the 30th of December 2019).
    Instead of the extension of the wings of the Louvre, we also played with the idea of only constructing the Tour des Tuileries==Tuileries Tower or Tour du Louvre==Louvre Tower as some kind of a projection of the Jardin des Tuileries into the third dimension, and calculated with 750 million Euro for the construction of the tower and 4 billion Euro for the purchase of the plot, if we have to do the latter at all (see the Further steps of the 31st of December 2019).
    Nevertheless, we would have some money left. Luckily, we also found 4 alternative areas respectively 4 more areas for nice constructions, where the buildings meant for the extended wings of the Louvre could be placed.
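
    As a rough check of the tower variant: from the reserved 8 billion Euro, 750 million Euro for the construction of the tower and at most 4 billion Euro for the plot leave at least 3.25 billion Euro, and even more if the plot does not have to be purchased at all.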

    Howsoever, we are sure that our budget will be sufficiently convincing to make our wishes come true. N'est-ce pas?


    02.January.2020

    03:52 and 16:23 UTC+1
    SOPR #263

    *** Work in progress - better order and wording, maybe one or more arguments missing ***
    Topic

    This issue is a special one related to the field of Smart Urban System (SUS) about the topic:

  • Digital rights.

    Digital rights
    We have observed the developments worldwide and came to the conclusion that it is time to tell what is fact, what is fiction, and what is wishful thinking by scientists, urban development planners, and other persons in relation to our Ontologic System (OS), Ontoscope (Os), Ontologic Applications and Ontologic Services (OAOS), and so on.

    Cities, like for example

  • Barcelona, Kingdom of Spain,
  • Bordeaux, French Republic,
  • Florence, Italian Republic,
  • Edinburgh and Manchester, both United Kingdom, and also
  • Toronto, Canada, and
  • Seattle, U.S.A.,

    want to develop Smart Urban Systems (SUSs), including smart cities. In the course of this, they also want to guard citizens' data, which is becoming more of an issue as cities collect data via sensors, CCTV cameras, and even telecom networks. "Under a plan initiated with other cities including Bordeaux, Edinburgh, Florence and Manchester, Barcelona is determined that citizen data, which is defined as personal or non-personal information generated in the digital public sphere - should be recognised as a public and individual asset and should be used solely in the public interest.
    "We believe that technology has to be at the service of citizens to improve the quality of life in cities and not to create digital exclusion, said the city's commissioner for digital innovation [...].
    "Smart doesn't just come from the intelligence provided by the technology but also from the citizens, their experience, their knowledge which can be gathered to make better public decisions."
    [...]
    "We need to explain how we collect it, what we collect and what we are going to do with it," he said."

    But there are these small but fine problems:

  • We own the digital rights, inclusive of all raw signals and data, of smart cities managed and operated on the basis of our OS respectively hooked into the infrastructure of our Society for Ontological Performance and Reproduction (SOPR) respectively in our
    • Ontologic Net (ON), which is the successor of the Internet,
    • Ontologic Web (OW), which is the successor of the World Wide Web (WWW) and the Semantic (World Wide) Web (SWWW), and
    • Ontologic uniVerse (OV), or Ontoverse, which is the successor of the reality, also called the New Reality (NR),
  • The fields of Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES), including the
    • Semantic Sensor Network (SSN),
    • Web of Things (WoT),
    • Semantic Web of Things (SWoT),
    • Semantic Sensor Web (SSW),
    • etc.

    are included in the related parts of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) with our Ontologic Net of Things (ONoT), Ontologic Web of Things (OWoT), and Ontologic uniVerse of Things (OVoT).

  • In a democratic state, or at least a genuine constitutional state, it is not possible to expropriate a living artist from her or his work of art in whole or in part in general, and to expropriate C.S. from the work of art titled OS in whole or in part in particular. Please become serious and obey reality and the law, finally.
  • It will become virtually impossible to find a
    • Communications Service Provider (CSP or ComSP), specifically
      • Telecommunications Service Provider (TSP) and
      • Internet Service Provider (ISP),
    • Web Service Provider (WSP), or
    • Cloud Service Provider (CSP or ClSP),

    that does not provide services based on our OS, specifically on our

    • Ontologic System Components (OSC),
    • Ontoscope Components (OsC), and
    • Ontologic Applications and Ontologic Services (OAOS).
  • Scientific institutes cannot help in this case, not even the universities in California and Massachusetts, both U.S.A., including that project of Berners-Lee
    • sponsored by Alphabet (Google), Microsoft, Amazon, Samsung, and Co., and
    • supported by governments that have no right to make decisions, which
      • are arbitrary and capricious,
      • infringe the rights of C.S. and our corporation, and
      • even harm democracy.

    In this context, we often recall the following result of our very thoroughly conducted considerations:
    No entity, not even a government or a state union, has the right to make decisions on how we handle signals and data in our legal scope, domain, or sovereign space beyond the laws, acts, and regulations, as well as agreements concerning data privacy and digital rights being in effect.
    If the European Commission (EC) makes it mandatory that a part of our OS or our Os or both is installed in new vehicles, then it always has to

  • ask us for allowance first and also
  • comply with our terms and conditions for using our property rights as well,

    or otherwise it would have to formally and officially expropriate C.S. from the related ArtWorks (AWs) and further Intellectual Properties (IPs), and pay a reasonable and customary compensation, either nonrecurring or recurring for as long as our properties are utilized, though that will not become effective, because such an expropriation is not possible in a democracy.

    This is no matter of debate and should never have been a matter of debate at all.
    Therefore, if a smart city wants to use our properties, including our original and unique, iconic ArtWorks (AWs), then it is in the public interest that said city complies with the AoA and the ToS with the LM of our SOPR, which implies that we

  • get royalties,
  • get raw signals and data in any case, and
  • exploit our right to handle the signals and data respectively our digital properties in the limits of laws, acts, and regulations, as well as agreements concerning data privacy and digital rights as we want to.

    So any radical plan, revolutionary language and statements, and so on are not appropriate and lead nowhere, because cities are not above the law. :)

    As already said in the issue SOPR #255 of the 3rd of December 2019, we are absolutely sure that this will not change the

  • legal situation in relation to our rights, specifically, our moral right and our copyright regarding our original and unique, iconic, and industry standards defining work of art titled Ontologic System and created by C.S., and
  • ways in relation to our exploitation of the oeuvre of C.S., including the
    • Ontologic System (OS) ArtWork (AW) rights,
    • other Intellectual Property (IP) rights,
    • other property rights given by the
      • legal scope of our digital rights, digital interest, digital property, or digital estate,
      • legal scope of our Ontologic System (OS),
      • domain of our New Reality (NR) respectively
      • sovereign space of our OntoVerse (OV), also known as OntoLand (OL),

    specifically for handling signals and data.
    The raw signals and data will be

  • passed onto our SOPR,
  • stored if required or found to be reasonable,
  • processed in the core of its infrastructure,
  • processed and transformed into information and knowledge, and
  • made available on the Marketplace for Everything (MfE) of our SOPR

    in a legal way.

    It is even more in the interest of the public, because it provides benefit for the public, that

  • the cities comply with the laws of their own states and
  • their citizens' signals, data, and information go through the core of the infrastructure of our SOPR.

    This does not eliminate the right of cities and their citizens to opt out of selling their personal signals and data on the MfE of our SOPR. But it also does not prohibit them or the SOPR from selling their personal signals and data on the MfE after they were anonymised in a transparent and monitored way (see the sketch below).
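
    For illustration only, a minimal sketch in Python of the described flow of raw signals and data, with the opt-out respected and anonymisation applied before any offering on the MfE; all names, and the hash-based anonymisation technique, are assumptions of this example and not a specification of the infrastructure of our SOPR:

      import hashlib
      from typing import Optional

      def anonymise(record: dict) -> dict:
          # One possible technique: replace the citizen identifier with an
          # irreversible hash, so the record no longer names the citizen.
          out = dict(record)
          out["citizen_id"] = hashlib.sha256(record["citizen_id"].encode()).hexdigest()
          return out

      def handle_raw_signal(record: dict, opted_out: set) -> Optional[dict]:
          # Sketch: passing on, storing, and processing happen elsewhere in the
          # core of the infrastructure; here only the opt-out and the
          # anonymisation step before the MfE are modeled.
          if record["citizen_id"] in opted_out:
              return None  # citizen opted out of selling personal signals and data
          return anonymise(record)  # may be offered on the MfE afterwards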

    See the issues SOPR #244 of the 4th of November 2019, #251 of the 19th of November 2019, SOPR #256 of the 3rd of December 2019, and #261 of the 28th of December 2019, and also the note Laws contradictory of the 11th of November 2019 and the Ontonics Further steps of the 7th of December 2019.

    King Smiley Further steps

    When working on our estate projects in the town Paris, F.R. (see the Further steps of the 30th of December 2019, 31st of December 2019, and 1st of January 2020), we concluded that a(n observation) tower would be nice for the town New York City, U.S.A., as well. But in our opinion it would have to be a really big respectively high tower. Consequently, we called it 1K for the moment (1K for 1 kilometer or 1,000 meters) or higher if required for prestige.
    So the question is where it can be constructed. Our suggestion is to improve the ugly design of the Central Park, specifically one of the areas Play Ground and The Green, or The Great Lawn, or the decommissioned and also for other reasons useless Croton Reservoir or Jacqueline Kennedy Onassis Reservoir, which would be ideal.
    Even better, variants of our architectures allow the integration of the already existing features.


    04.January.2020

    King Smiley Further steps

    For quite some time, we have been working on a project that was the private real estate of C.S. at first, but somehow developed into

  • a series of hotels and at least 7 resorts in the
    • U.S.A.,
    • French Republic,
    • Italian Republic,
    • Hellenic Republic, also known as Greece, and
    • elsewhere,

    and even

  • an establishment operator and business unit of our business unit King Smiley of our Hightech Office Ontonics.

    In this relation, some clever entities assumed that we are building up a cruise line. But this assumption, based on espionage, is wrong. The facts are that

  • related activities are only related to the construction of the private yacht of C.S. and
  • our holidaymakers do fly with our fleet of cruise airships. :D


    05.January.2020

    19:20 UTC+1
    Clarification

    *** Work in progress - links missing to former besys comments ***
    "The judge [Robin Postle] ruled that ethical vegans should be entitled to similar legal protections in British workplaces as those who hold religious beliefs. [...]
    "Religion or belief" is one of nine "protected characteristics" covered by the Equality Act 2010.
    The judge Robin Postle ruled that ethical veganism qualifies as a philosophical belief under the Equality Act 2010 by satisfying several tests - including that it is worthy of respect in a democratic society, not incompatible with human dignity and not conflicting with the fundamental rights of others.
    At the tribunal [...], the judge said in his ruling that ethical veganism was "important" and "worthy" of respect in a democratic society."

    We often explained that our work of art titled Ontologic System and created by C.S. is also some kind of a belief system and due to its foundation on ontology this belief system is based at least on a philosophical belief, logically. So it is compatible with religions and other beliefs.
    But we also mentioned that the ontological argument or ontological proof is a known practice of proposition in philosophy (of religion).

    Eventually, our copyright has been confirmed once again.

    07:51, 08:22, 18:55, and 26:20 UTC+1
    SOPR #264

    *** Sketching mode - Work in progress ***
    Topics

  • Legal matter
  • Infrastructure
  • New SOPR members
  • Further steps

    Legal matter
    The copyright for the work of art titled Ontologic System and created by C.S., including our Ontologic System Components (OSC), Ontoscope Components (OsC), and Ontologic Applications and Ontologic Services (OAOS), as well as for all other works of art included in the oeuvre of C.S., has been confirmed once again on the statute of philosophical belief (see the Clarification of today).

    Furthermore, copyright law says that any modification of a work of art is allowed only by the creator of it. If a modification of a work of art is required for a legal reason but rejected by the creator, then the creator is given the option to buy (back) said work of art to be modified.

  • 1. We have not sold anything.
  • 2. We only give the allowance for any modification of our Ontologic System under our Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions as regulated with the Articles of Association (AoA) and the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR).
  • 3. We intend to get all of our ArtWorks (AWs) and further Intellectual Properties (IPs) back, at least under our power of control.
  • 4. We will only pay the cost price of an illegal plagiarism of one of our AWs and IPs if a takeover price is not already covered by a damage compensation or a royalty being due.
  • 5. We ask for no purchase restriction for company shares.

    We also got more evidence that confirms even more the

  • various allegations of us in relation to illegal actions conducted by external entities and
  • legal standing claimed by us.

    We also have to recall and point out once again, that members of our SOPR have to acknowledge our rights unreservedly. This implies that related Free and Open Source Hardware and Software (FOSHS) provide no legal certainty.

    Infrastructure
    The guideline in respect to technology, goods, and services is roughly drawn up in the following way:

  • SOPR original and unique Ontologic System, specifically (operational and interoperable) infrastructure vs. SOPR members individual platforms, goods, and services, and
  • interoperability.

    Please note that a fine line cannot be drawn in all cases but if required, then a regulation has to be found in practice.
    Please

  • add interfaces for our access to the raw signals and data to your software stacks (a minimal sketch follows below), in addition to
    • registering your services in the common service index and
    • providing interfaces for your services, and
  • remove infringing Free and Open Source Software (FOSS) from your software stacks, specifically unified architectures and frameworks, As Soon As Possible (ASAP) Or Even Better Immediately (OEBI).
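
    For illustration only, a minimal sketch in Python of what such a registration of services and interfaces could look like inside a member's software stack; the class, method, and field names are assumptions of this example, not a prescribed API of our SOPR:

      from dataclasses import dataclass
      from typing import Callable, Dict

      @dataclass
      class ServiceEntry:
          name: str
          service_interface: Callable   # interface provided for the service itself
          raw_data_interface: Callable  # interface for access to the raw signals and data

      class CommonServiceIndex:
          # Sketch of a common service index in which services are registered
          # and looked up by name.
          def __init__(self) -> None:
              self._services: Dict[str, ServiceEntry] = {}

          def register_service(self, entry: ServiceEntry) -> None:
              self._services[entry.name] = entry

          def lookup(self, name: str) -> ServiceEntry:
              return self._services[name]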

    Or we will do it in addition to something else.
    These points are neither negotiable nor avoidable.

    In addition, we would like to give the recommendation to come to terms with our SOPR and its infrastructure and our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), including or being the fields of Grid, Cloud, Edge, and Fog Computing (GCEFC), Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES), including Industrial Internet of Things (IIoT) and Industry 4.0 and 5.0, and Big Data Processing (BDP) of the first and second generations, which

  • are the systems and platforms in which subsystems, platforms, and other systems have to be hooked into and
  • provide services in these fields as well, also for taking back IPs, control, operation, leadership, momentum, etc.

    Howsoever, we are sure our opinion on the matter has become clear already.

    New SOPR members
    We have new members of our SOPR with the company Splunk and the subsidiary Hitachi→Hitachi Vantara, besides an estimated 3,000 more new members. This is quite huge.

    Further steps
    We have focused on Grid, Cloud, Edge, and Fog Computing Systems (GCEFCSs), and Cyber-Physical Systems (CPSs), Internet of Things (IoT), and Networked Embedded Systems (NESs), but noticed that so many developments in the fields of

  • Business Intelligence (BI), Visualization, and Analytics (BIVA),
  • Data Science and Analytics (DSA),
  • Big Data Fusion (BDF),
  • Big Data Processing (BDP),
  • Big Data Analytics (BDA), and
  • other fields

    are already based on our OS as well and their economic systems have become huge.
    This requires a reflection of our activities and decisions, and potentially an adaptation of one or more activities and a rejection of one or more decisions.

    Oh, this is already very big and even is becoming much bigger with every moment.

    Thank you very much.


    06.January.2020

    Investigations::Multimedia, AI and KM

    Samsung
    After the company presented essential parts of our OntoBot and OntoScope software components, which constitutes one of its many infringements of our rights, it even presented an essential part of the cybernetic self-reflection, cybernetic self-image, or cybernetic self-portrait of C.S., as well as our integration of the fields of SoftBionics (SB), and also Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES), respectively the related parts of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), as its own vision.
    We quote a related report published by a media company, which has known exactly for many years that our work of art titled Ontologic System and created by C.S. is the original of both Bixby and Neon: "The company on [the 24th of December 2019] [said] that "contrary to some news, NEON is NOT about Bixby, or anything you have seen before.
    [...]
    Samsung, meanwhile, believes we're starting a new era of technology, something Kim dubbed "the Age of Experience" in a blog post Thursday. The key for this new era, he said, is giving users "personalized technology."
    "The devices you use will understand you as an individual, blurring the boundaries between the digital and physical worlds, and changing the way you interact with your cities and communities," he wrote.
    And "instead of changing your routine to incorporate more devices, your devices will work seamlessly for you," Kim added. "Just imagine how much more you could accomplish with an intelligent companion that supports you, instantly reacting to your needs."
    For Samsung, that starts in 2020. While CES isn't a big mobile show, some of the big bets the company took last year will be key in its efforts to make our devices smarter. That includes 5G, the super-fast mobile technology that launched in 2019. While 5G had some hiccups last year, it's expected to become mainstream in 2020.
    At the same time, Samsung believes that its efforts in AI and the internet of things will help it "lead in this Age of Experience," according to Kim.
    "At Samsung, we see a future of opportunity," he wrote. "With the emergence of AI and IoT, finally enabled by the power of 5G, the start of 2020 marks a moment where the realization of our vision for a[n] intelligently connected world becomes a reality.""

    In addition, we got the information about patents that suggest Neon may involve computer-generated humans for use in Augmented Reality (AR) content.

    Obviously, the company has copied the part of our Ontologic System (OS), which could be described as a synthetic, softbionic, anthropomorphic, and lifelike character.
    A quick look at the webpages

  • Overview, specifically its sections
  • Caliber/Calibre,
  • Ontologic Applications,
  • Links to Software, specifically its sections

    and

  • Links to Hardware, specifically its section

    of the website of our OS OntoLinux proves doubtlessly and definitely that our OS is indeed the original and unique work of art, which Samsung has copied and also presented in an illegal way by claiming to be the creator of our idea, vision, and expression of both with the OS, and misleading the public about the true origin of our accomplishment in this way, respectively that NEON is NOT a creation of Samsung, but merely an implementation of the related parts and their integrations of our OS.
    Also note the relation between, as well as our match and integration of the fields of

  • Augmented Reality (AR) and Synthetic Reality (SR or SynR), and
  • synthetic reality and synthetic character,

    which are also relevant in connection with our rights and their infringement by Samsung.

    We will act accordingly and as clearly communicated for ensuring fairness and trustworthiness.
    In addition, the fact that patents do exist that do not mention our prior, original and unique, iconic work of art titled Ontologic System


    08.January.2020

    Comment of the Day #1

    "I am a proven creator of a reality. I am not a new species.", [C.S., Today]

    Comment of the Day #2

    "2005: We are into something. 2020: We are onto something.", [C.S., Today]


    09.January.2020

    Ontonics Further steps

    We had once again the situation that we could not see the forest for the trees. But the initial plan still is that our business unit Ontologics also provides the technologies, goods, and services for our Ontologic System (OS) with its Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV). This also includes the so-called Grid, Cloud, Edge, and Fog Computing Systems (GCEFCSs), and related infrastructures and platforms.

    In the recent past, we also made a list of other takeover candidates, and we are sure that their owners, like for example the companies Microsoft, Volkswagen, and Qualcomm, will be happy to sell them to us for social, political, legal, technological, and economic reasons, if they are not purchased by us anyway for the same reasons. Isn't it? :)
    Our offers will be the investments and prime costs in most of the takeovers, but not more than 100%, because our arguments are just too good.

    Total success for our OS created by C.S.

    We quote a report about a relatively new industry forum: "[...]
    "Digital transformation and the growth of data is driving an infrastructure build out that will dwarf the first era [and second generation] of the cloud defined by hyperscale data centers [...]. We are using information technology now in completely new ways that demand we move, store and process data even faster, more securely and often closer to the person using the service. The combination of superfast networking and pervasive high-performance computing - the edge infrastructure to deliver smart services anywhere, anytime - can only be achieved with a profoundly new mindset shared across a global ecosystem.
    [...] A vision of this magnitude can only be achieved with global leaders across industries.
    [...] We will bring our leading R&D expertise to foster the [...] revolution and unlock new technologies to ultimately enable a smart world, where technology becomes so 'natural' that people are unaware of its presence. [...]
    [...]
    The forum will also focus on use cases and best practices for "smart world" applications, and enabling technologies such as digital twin computing, which is a computing paradigm that enables humans and things in the real world to be re-created and interact without restrictions in cyberspace; R&D for human behavior and society modeling; large-scale simulations and next generation user interface and user experience device technologies.
    Next generation communications hold the promise for improving many aspects of life, including remote healthcare, disaster prevention, education, automated driving, sports and entertainment and industrial manufacturing."

    What a total success never seen before:

  • This superfast networking and pervasive high-performance computing is an essential part of our Ontologic Net (ON), which is the successor of the Internet,
  • this edge infrastructure for smart services is an essential part of our Ontologic Web (OW), which is the successor of the World Wide Web (WWW) and the Semantic (World Wide) Web (SWWW),
  • this 'natural' technology is an essential part of our Calibre/Caliber,
  • this smart world is the one essential half part of our Ontologic uniVerse (OV), or Ontoverse, which is the successor of the reality, also called the New Reality (NR),
  • this global ecosystem is an essential part of our Ontologic Economic System (OES), and
  • this vision of this magnitude with all these parts and much more is an original and unique achievement of our OntoLab, The Lab of Visions, and has been expressed with our iconic work of art titled Ontologic System and created by C.S..

    01:11, 07:00, 09:14, 11:07, 14:04, and 22:57 UTC+1
    SOPR #265

    *** Work in progress - some links missing ***
    Topics

    The action is heating up and therefore we continue with keeping the pack on track in relation to the following topics:

  • Legal matter
  • Infrastructure

    Legal matter
    The fields of

  • Distributed System (DS),
  • Grid, Cloud, Edge, and Fog Computing System (GCEFCS), and
  • Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES),

    respectively the foundations of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV)

    have become the main areas for disturbing the goals and even threatening the integrity of our SOPR.
    Of course, our SOPR, like every other of our managing and collecting societies, wants to protect its goals, its Ontologic Economic System (OES), its integrity, and its union (following the European Commission (EC)). Therefore, there can be no compromise: we need to act decisively to make sure that this permanent attack, which companies are conducting actively, is disrupted (following the U.S.American administration).

    Luckily, in our wise foresight we have already formulated and revised the AoA and the ToS with the LM of our SOPR, as can be easily seen and understood by the following activities:

  • In the issue ... of the ... we already discussed the matter from a general point of view.
  • In the issue ... of the ... we decided for a revision of the AoA and the ToS, and announced further revisions if required and reasonable.
  • In the issue ... of the ... we introduced the provision to register services and provide interfaces for them.
  • In the issue ... of the ... we already withdrew the option for subON, subOW, and subOV.
  • In the issue ... of the ... we explained that we are willing to modify our OS to fulfill the demands and requirements of the public, specifically freedom of choice, innovation, and competition, and also comfort, safety, and security, as well as privacy pro bono publico==for the public good.
  • In the issue #264 of the 5th of January 2020 we made once again clear that we have the rights to demand the reproduction and the performance of our OS without any modification in addition to be named.
  • In the issue #223 of the 24th of August 2019 we introduced movements, design elements, frameworks, and standards to fulfill the other demands and requirements of the public, like for example interoperability.
  • In the issue #248 of the 12th of November 2019 we said that we have an own infrastructure.
  • In the issue #260 of the 26th of December 2019 we said that voice-based systems and cloud computing systems are already included in the infrastructure of our SOPR, and no other Distributed System (DS), voice-based system, and virtual assistant or Intelligent Personal Assistant (IPA), as well as Cyber-Physical System (CPS), Internet of Things (IoT), and Networked Embedded System (NES) is required to provide ... services ...
  • In the issue #264 of the 5th of January 2020 we gave a corresponding guideline and mentioned that the guideline would not be sufficient to draw a clear line, because Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) theoretically allow Everything as a Service (EaaS), including all kinds of Technology as a Service (TaaS) and Service as a Service (SaaS), eventually including the infrastructure of our SOPR and our ON, OW, and OV, and finally our OS.
  • In this issue we make clear that in general it is not required at all that other entities provide Grid, Cloud, Edge, and Fog Computing (GCEFC) technologies to fulfill the demands and requirements of the public for the benefit of the public, because these can already be achieved when we do not modify the OS in such a way, but our SOPR keeps the power of control and management over GCEFC technologies.

    Eventually, we have to make clear that the fields of GCEFC, TaaS, and SaaS utilized for communications services in civil or private or both areas and for related purposes, specifically telecommunications services and Internet services, including connectivity services, are viewed and considered by others and us as parts of the infrastructure of our SOPR, exclusively kept under power of control and managed by our SOPR. This also covers operating common communications services by utilizing a GCEFCS, TaaS, and SaaS.
    Please note

  • the difference between
    • cloud computing based mobile networking, also called by others cloud-native or cloud-based (4G, 5G, xG) networking, carrier-grade telecommunications service provider cloud (telco cloud), and so on, and
    • a provision of connectivity between a cloud and a thing (same with the rest of GCEFCS, TaaS, and SaaS),
  • manufacturers of hardware, vehicles, and other technologies and goods, and providers of contents and other services are not restricted in our OES for freedom of choice, invention, and competition pro bono publico, and
  • manufacturers of related hardware, and also Communications Service Providers (CSPs), specifically Telecommunications Service Providers (TSPs or telcos) / telecommunications companies (telcos) and Internet Service Providers (ISPs), and others still have the important roles of contractors, suppliers, and providers of our SOPR.

    If still required, then we will talk about all use cases in civil or private or both areas and for related purposes in these fields in a future issue.

    In this relation, we would also like to keep up our tradition of making recalls of legal matter and point out once again that the SOPR is the place where migration, composition, and integration is taking place and that we consider all other attempts as disturbing the goals and even threatening the integrity of our SOPR.

    Infrastructure
    In relation to the part of our infrastructure for Smart Urban Systems (SUSs), specifically smart cities, we heard that cities are preparing for the transformation but have no funding for construction and installation. In fact, this is no problem, because we said we will raise it for the public.

    Correspondingly, we have asked our business unit for special tasks in design, architecture, and all the other beautiful things, King Smiley, to develop an overall masterplan for the urban environments and facilities of the infrastructure of our SOPR, specifically the data centers that execute the 1st ring of the IDentity Access and Management System (IDAMS) with the rings and ID spaces as well as the system core, the administration core, and the universal ledger of the infrastructure of our SOPR.
    Thanks to the Hightech Competence of our Hightech Office Ontonics, which is also the legal parent of our managing and collecting societies, we were able to get all elements together, from the single bit over the support of our team of hard-working contractors, suppliers, and providers to the smile on the face of the happy end user.
    This will be huge.

    The details in relation to Very Important Parts and Persons (VIPPs or VIP²s) are confidential matter and will be shared only with federal authorities.
    Indeed, we promised a way for transparent monitoring of the OS core and other parts of the infrastructure by the public, but this does not require getting all information about the infrastructure. If for example an accredited representative of a public group of interest really wants to visit a facility, then we will organize a journey in a special vehicle having no windows. Our SOPR has nothing to hide.

    In the course of the transition process we will take over business units, subsidiaries, and companies related to GCEFC technologies, TaaS, and SaaS, if of interest.

    One OS


    10.January.2020

    22:11 UTC+1
    Ontonics Further steps

    *** Work in progress - some few better wording ***
    We concluded that the estimation of the enterprise value of the subsidiary Amazon Web Services (AWS) of the company Amazon is much too high and must be corrected to 30% (36 bn USD) or even less, because our Society for Ontological Performance and Reproduction (SOPR) has blacklisted it, which means it

  • has no legal certainty in relation to our ArtWorks (AWs) and further Intellectual Properties (IPs)

    (see also the Further steps of the 1st of January 2020).

    07:00, 17:55, 22:11, and 22:33 UTC+1
    SOPR #266

    *** Sketching mode ***
    Topics

  • Legal matter
  • License Model (LM)
  • Exclusion of Amazon
  • Exclusion of Samsung
  • Further steps

    Legal matter
    We are considering once again to withdraw our offer of licensing our original and unique works of art due to the reasons that

  • governments have played foul,
  • governments have no legal handle and hence no possibility to expropriate us,
  • antitrust authorities have to show at first that we abuse our market power,
  • freedom of choice, innovation, and competition pro bono publico==for the public good and other benefit for the public can be set free by only commissioning contractors, suppliers, and providers,
  • absolutely ridiculous, absurd, and grotesque expectations and demands of potential members of our SOPR are beyond our ability, and
  • too many of potential members of our SOPR intended to disturb the goals and even threaten the integrity of our SOPR and therefore had to be put on our blacklist.

    Because we found out that the government of the F.R.Germany has not cheated alone but together with other member states of the European Union (EU), we restore the grant of one discount on our fees and shares for the country.
    But we also found out that said fraud was and still is much more elaborate. Therefore, we withdraw one discount from all member states of the EU exclusive of the country F.R.(?), so that from all member states of the EU inclusive of the country U.K. 2 discounts (plus 2.50%) on our fixed fees and relative shares according to the License Model (LM) have been withdrawn.

    Due to various undesirable actions carried out individually and jointly, we withdraw one discount from all vehicle manufacturers included in the licensee class with the designation Industrial non-Information and Communication Technology (ICT) with ICT, which comprises virtually all manufacturers of connected vehicles.

    Due to the latest exclusion of companies, we would like to recall that an exclusion is handled like the situation when our agreement is not signed at its first submission, respectively we grant a maximum of only 6 discounts in such a case.

    License Model (LM)
    {section was added for shocking and might be deleted} We have added two new licensing options to our License Model (LM), and ask as the royalty for the performance of our Ontologic Applications and Ontologic Services (OAOS) a

  • share of 3% of the overall earning or profit generated with the performance of our OAOS for not naming the true origin of our OS, that are C.S., our OntoLab, our Hightech Office Ontonics, and our business unit Ontologics in addition to any other royalty being due and
  • share of 15% of the overall revenue generated with the performance of our OAOS that belong to the fields of Technology as a Service (TaaS), including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), Service as a Service (SaaS), or comparable Ontologic System-level services or metaservices, with all 7 discounts (a worked example follows below).
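
    For illustration only, a worked example of the second option under the assumption, based on the discount arithmetic used in this and other issues, that each discount corresponds to 1.25 percentage points: with all 7 discounts granted the share is 15% of the overall revenue, so a TaaS or SaaS revenue of, say, 1 billion USD would yield a royalty of 150 million USD; with only 6 discounts granted the share would be 16.25%, or 162.5 million USD.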

    In relation to the licensing option for OS-level services or metaservices we will take a closer look to define its scope, specifically if for example

  • Intelligent Personal Assistant (IPA) not operated on-device respectively on-site on an unconnected device,
  • Business Intelligence (BI), Visualization, and Analytics (BIVA),
  • Data Science and Analytics (DSA),
  • Big Data Fusion (BDF), Big Data Processing (BDP), and Big Data Analytics (BDA) conducted in real-time

    are covered as well.

    Pay the correct fee or sell the platforms or assistants.

    Exclusion of Amazon
    Our SOPR cannot and will not tolerate any business strategy of that kind as practiced by the company Amazon and other companies in our

  • legal scope of our digital rights, digital interest, digital property, or digital estate,
  • legal scope of our Ontologic System (OS),
  • domain of our New Reality (NR) respectively
  • sovereign space of our OntoVerse (OV), also known as OntoLand (OL).

    Amazon is already acting in fields like

  • robotics,
  • cloud gaming,
  • Health Care Management System (HCMS) and Hospital Information System (HIS),
  • transport,
  • connected car,
  • etc.,

    which shows that it wants to steal our corporation, like for example the companies Alphabet (Google), Microsoft, Samsung, Volkswagen, and Co. do.

    In this relation, we investigated the latest statements and actions of the company done in conjunction with the listed fields and concluded that specifically the various collaborations with automobile manufacturers provide evidence for one or more acts of

  • illegal abuse of market power,
  • illegal agreement,
  • illegal conspiracy, and
  • other individual and orchestrated
    • violation of laws in general and
    • infringement of the rights of C.S. and our corporation in particular.

    The evidence also shows that our allegations about its true intention, which is to disturb the goals and even to threaten the integrity of our SOPR, are correct.
    Therefore, we had to blacklist the company Amazon in accordance with the related provisions of the AoA and the ToS.

    Exclusion of Samsung
    According to the related provisions of the AoA and the ToS of our SOPR it is not allowed to present any work included in the oeuvre of C.S. as an own idea, vision, creation, invention, or achievement, specifically the cybernetic self-reflection of C.S. and the Ontologic System (OS) with its Calibre/Caliber, Ontologic System Architecture (OSA), Ontologic System Components (OSC), and Ontologic Applications and Ontologic Services (OAOS), as well as the Ontoscope (Os) in whole or in part.
    In this relation, we investigated the statements and actions of the company Samsung, and concluded that they were done intentionally or deliberately. This would not be the real problem if it licenses our works of art. But its confused statements showed that the company did not know what it has copied, or better said, what it has to copy to cause the greatest damage to C.S. and our corporation.
    Obviously the company acted deliberately, because it said in a press release that

  • its artificial humans or human-like chatbots
    • are no copies of real humans,
    • cannot be exact copies of existing humans, and
    • have no embodiments actually,

    and

  • "[t]here are millions of species on our planet, and we hope to add one more".

    But C.S. is real and the true creator of the OS and the OV, which

  • is a cybernetic self-reflection as a proposition of an ontological argument or ontological proof of the own existence and also
  • includes artificial life, artificial human (life), chatbots, and everything else presented by Samsung.

    Therefore, C.S. is not merely a new species.
    All these statements show that the company Samsung refuses to acknowledge the rights of C.S. and our corporation even beyond the copyright and the property right by questioning the personal rights of C.S..
    It even presented itself as the creator of the virtual inhabitants of our Ontoverse and in this way even tried to present an own work of art including an own ontological argument or ontological proof to circumvent our copyright and licenses. Needless to say that this does not work from the philosophical, technological, and legal points of view, because it merely edited and implemented the related parts of our OS but did not create an own original and unique expression of idea.
    It also explained it sometimes in this way, sometimes in the other way, and eventually in a way that confused the public and failed to keep the hype.
    Exactly that documented confusion proves that the intention of Samsung was not to create a chatbot based on the field of Artificial Intelligence (AI) and the reproduction of our Ontologic System Components (OSC), but to damage our interests as much as possible.
    Furthermore, Samsung has no right to decide how we capture and handle signals and data in our digital interest.
    Like the members of the Android Open Handset Alliance led by the subsidiary Alphabet→Google, better known as the Android Consortium, and the company Huawei and many other members of the Android Green Alliance led by Huawei, Samsung also acted as a proxy for the company Google. This is emphasized by the fact that its partial clone of our OntoBot component is not relevant anymore as well, in favour of the partial clone of our OntoBot called Google Assistant.
    Therefore, we had to blacklist the company Samsung in accordance with the related provisions of the AoA and the ToS.
    We also raise the question of whether the company Samsung has to get a lifetime ban.

    Further steps
    There is considerable and substantial reason to fear that those experts from Silicon Valley, Detroit, and elsewhere have worsened the problem by such an extent in the last 400 days that it is not solvable anymore while keeping federal authorities out.
    There is not much convincing reason for keeping up that farce.

    By the way: If they refuse to collaborate with us, then one of the last arguments against blowing the whistle is no longer given.

    Not being nice becomes unsustainably expensive.


    11.January.2020

    Comment of the Day

    "Being nice is not an offer or an argument, but an unconditional attitude.", [C.S., Today]


    13.January.2020

    10:00 UTC+1
    One OS

    It is not n+1 but 1+n.
    Our actual offers have been communicated exhaustively since the 27th of October 2019, are on the table (see the Ontonics Further steps of the 1st and 10th of January 2020), and also constitute the new basis for a potential agreement with our SOPR.
    See also the note Ontonics Further steps of the 9th of January 2020 where the

  • Grid, Cloud, Edge, and Fog Computing (GCEFC) metaplatforms, or being precise, the related parts of our ON, OW, and OV platforms are collected and
  • individual platforms have to be hooked into or onto them, and executed, operated, orchestrated, and so on, like for example systems and platforms of the fields of (note the partially wrong designations)
  • carrier-grade communications service provider Grid, Cloud, Edge, and Fog Computing (GCEFC),
  • SoftBionics (SB),
  • Business Intelligence (BI), Visualization, and Analytics (BIVA), and Data Science and Analytics (DSA), including
    • statistical learning and analysis,
    • data mining,
    • Big Data Fusion (BDF),
    • Big Data Processing (BDP), and
    • Big Data Analytics (BDA),
  • Intelligent Personal Assistant (IPA),
  • Mixed Reality (MR) cloud,
  • Cyber-Physical System (CPS), mirror world, and digital twin,
  • Autonomous System (AS) and Robotic System (RS),
  • vehicle platform, including automotive platform,
  • and so on,

    under our Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions.

    Please note once again that if we have to go to the courts, then

  • the conspiracy will be documented for another time, and
  • we will win sufficient legal matter and the other side will get serious problems with the authorities as well.

    Most potentially, both will lead to a development that will end quite quickly in insolvencies.

    10:00 and 23:00 UTC+1
    SOPR #267

    *** Sketching mode ***
    Topics

  • Legal matter
  • Infrastructure
  • Exclusion of Qualcomm
  • Exclusion of Rakuten
  • Exclusion of Mercedes-Benz
  • Exclusion of Bayerische Motorenwerke
  • Exclusion of Toyota
  • Lifetime ban of Volkswagen
  • Lifetime ban of IBM

    Legal matter
    After we discussed multiple times without any improvement over the last months that our

  • rights, including the personal rights of C.S., are not protected and
  • sovereignty is not respected

    by the government, industry, and research community of the United States of America (U.S.A.) (see our investigations), and made careful in-depth considerations, we came to the only conclusion that we have to withdraw one discount on the fixed fees and relative shares (plus 1.25%) from the U.S.A. in accordance with the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR).

    The discount will be granted again, when the minimal requirements are fulfilled by the government of the U.S.A. and its industry and research community.
    We retain the right for further suitable measures if the situation does not improve.

    We will also take a look at other countries, specifically the P.R.China and Republic of India.

    We discussed the opening of our works of art respectively giving the allowance of licensing them as the only possibility to

  • restore law and order, and
  • provide a way that external entities can continue with their businesses.

    We also made clear that said external entities have to focus on their businesses and not on our businesses.
    We also made clear multiple times that we do not have to open our works of art respectively give said allowance at all.
    Neither governments, industries, nor research communities have followed our recommendations and suggestions.
    Instead, they all continued with their foul plays and conspiracies, specifically by now focusing on our SOPR and technologies, goods, and services based on our works of art, which again only repeats all the illegal practices.

    The situation is even worse. We

  • do have rights that have to be kept intact for the reason of democracy, and
  • do deserve a share for our
    • accomplishments in general and
    • licensing our ArtWorks (AWs) and further Intellectual Properties (IPs) in particular.

    But even

  • under terms and conditions that are unfair, non-reasonable, and non-customary for us and
  • the ridiculously low fixed fees and relatively low shares of our License Model (LM)

    entities are not able to comply and pay. The only viable way is that they either

  • sell their business units, subsidiaries, or even whole companies, as preferred by us,
  • reproduce and perform our AWs and IPs only to a reduced extent or not anymore, respectively we only open a reduced range of our Ontologic System Components (OSC), Ontoscope Components (OsC), and Ontologic Applications and Ontologic Services (OAOS) for licensing, or
  • just cease operations.


    As already mentioned in for example the issue ... of the ..., and as can be seen with the recent announcements of exclusions and lifetime bans, we expect more and more that we have to go to the authorities to enforce our obvious rights. But we also make clear that then there is even much less need for an out-of-court agreement.

    We also would like to recall once again that cryptocurrencies, like e.g. Bitcoin, are illegal OAOS and prohibited.
    We already demanded that marketplaces stop the trading of cryptocurrencies and other illegal digital and virtual currencies immediately.
    Neither governments, industries, nor research communities have followed our recommendations and suggestions, and still allow operation and trading of those digital currencies.

    Infrastructure
    Because our

  • Ontologic Net (ON), which is the successor of the Internet,
  • Ontologic Web (OW), which is the successor of the World Wide Web (WWW) and the Semantic (World Wide) Web (SWWW), and
  • Ontologic uniVerse (OV), or Ontoverse, which is the successor of the reality, also called the New Reality (NR),

    systems and platforms of the fields of

  • Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS),
  • High Performance and High Productivity Computing System (HP²CS),
  • carrier-grade communications service provider Grid, Cloud, Edge, and Fog Computing (GCEFC) or carrier cloud,
  • SoftBionics (SB),
  • Business Intelligence (BI), Visualization, and Analytics (BIVA), and Data Science and Analytics (DSA), including
    • statistical learning and analysis,
    • data mining,
    • Big Data Fusion (BDF),
    • Big Data Processing (BDP), and
    • Big Data Analytics (BDA),
  • Intelligent Personal Assistant (IPA),
  • Autonomous System (AS) and Robotic System (RS),
  • vehicle platform,
  • Mixed Reality Environment (MRE),
  • mirror world,
  • and so on,

    are considered the foundations of the infrastructure of our SOPR {homogenous definition required; SOPR infrastructure is a part of our OS and its ON, OW, and OV, or the partial implementation}.

    Exclusion of Qualcomm
    The latest developments may suggest that we have reached a point where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Qualcomm to question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of Qualcomm, including joint ventures with for example the companies Alphabet (Google) and Volkswagen, and also the members of the Android Open Handset Alliance led by the subsidiary Alphabet→Google, better known as the Android Consortium, as well as the company Huawei and the many other members of the Android Green Alliance led by Huawei.

    Exclusion of Rakuten
    The latest developments may suggest that we have reached a point where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Rakuten to question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.

    Exclusion of Mercedes-Benz
    The latest developments may suggest that we have reached a point where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Mercedes-Benz to blackmail us alone and in collaboration, question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of Mercedes-Benz, including Geely(?).

    Exclusion of Bayerische Motorenwerke
    The latest developments may suggest that we have reached a point where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Bayerische Motorenwerke to blackmail us alone and in collaboration, question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of Bayerische Motorenwerke, including ....
    {announcement of exclusion; continued to blackmail us alone and in collaboration; still focuses on our technologies (e.g. electric energy storage technologies, Zero Gravity Manned Aerial Vehicle (ZGMAV) and Zero Gravity Unmanned Aerial Vehicle (ZGUAV) technologies, etc.); agreement for IVI with TV}

    Exclusion of Toyota
    The latest developments may suggest that we have reached a point where we have to act in accordance with the provisions included in the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR.
    Therefore, with the next serious attempt of Toyota to blackmail us alone and in collaboration, question our competences, disturb the goals, and even threaten the integrity of our SOPR, we will put the company on our blacklist.
    Please note that this exclusion would affect all shareholders and supporting entities of Toyota, including ....

    Lifetime ban of Volkswagen
    The company Volkswagen or Porsche SE is still refusing to

  • accept reality, specifically that
    • it has not been in the position to make any decisions and demands for several years already,
    • it is the defendant of our lawsuit,
    • its suggestion of an equal partnership to commercialize the oeuvre of C.S. exclusively is simply ridiculous in total contrast to our takeover offer for Volkswagen of 6 billion Euro for at least 80% of its shares without that blocking minority, and
    • it must be happy if it is allowed to become a member of our SOPR at all,

    and

  • provide for a more harmonious business environment.

    Therefore, our SOPR has to issue our last official warning to it:
    If the company Volkswagen

  • does not immediately stop blackmailing C.S. and our corporation, alone and in collaboration, with dubious, illegal, and even seriously criminal business practices, such as
    • mimicking C.S. and our corporation,
    • stealing the shows of C.S. and our corporation,
    • stealing the AWs and IPs of C.S. and our corporation,
    • damaging the values of properties owned by C.S.,
    • disturbing the other business activities of our corporation,
    • disturbing the goals and even threatening the integrity of our SOPR,
    • abusing its market power, and
    • conducting conspiracies,

    and

  • makes one of the next anticipated steps,

    then our SOPR might impose a lifetime ban of membership.

    Lifetime ban of IBM
    The company IBM is still refusing to

  • accept reality, specifically that
    • it has not been in the position to make any decisions and demands for several years already,
    • its suggestion of an equal partnership to commercialize the oeuvre of C.S. exclusively is simply ridiculous in total contrast to our takeover offer for IBM of 52.5 bn USD, and
    • it must be happy if it is allowed to become a member of our SOPR at all,

    and

  • provide for a more harmonious business environment.

    Therefore, our SOPR has to issue our last official warning to it:
    If the company IBM

  • does not immediately stop blackmailing C.S. and our corporation, alone and in collaboration, with dubious, illegal, and even seriously criminal business practices, such as
    • mimicking C.S. and our corporation,
    • stealing the shows of C.S. and our corporation,
    • stealing the AWs and IPs of C.S. and our corporation,
    • damaging the values of properties owned by C.S.,
    • disturbing the other business activities of our corporation,
    • disturbing the goals and even threatening the integrity of our SOPR,
    • abusing its market power, and
    • conducting conspiracies,

    and

  • makes one of the next anticipated steps,

    then our SOPR might impose a lifetime ban of membership.


    16.January.2020

    11:21 UTC+1
    SOPR #268

    *** Work in progress - filling missing items, reducing redundancies ***
    Topics

  • Legal matter
  • License Model (LM)
  • Further steps

    Legal matter
    As we already mentioned before, the inevitable happened in the last year and the solutions presented by the industry leaders are neither convincing nor acceptable, but have only deepened the overall problem and made it clearer:

  • On the one side, we had to conclude that many members of our SOPR would not be able to pay our royalties, not even if we set the fixed fees and relative shares at ridiculously low levels and under terms and conditions that are not FRANDAC for us. In fact, it does not matter at all if a relative share of the overall revenue is set at 5% or 7% (see issue #267 of the 13th of January 2020).
  • On the other side, we said we will not make such ridiculously expensive presents only to get the next provocation, conspiracy, etc. Like hell we will feed the entire global industry.

    As a first measure, we reduced the range of licensing options, and put parts of our Ontologic System under the exclusive management and power of control of our SOPR.
    But even this measure does not

  • compensate the damages and
  • restore our
    • advantages due to the exclusive exploitation rights as a creator,
    • advantages due to being a pioneer and a first mover,
    • advantages due to follow-up opportunities, and
    • frustrated momentum.

    In addition, we had to notice that the illegal strategies, activities, and conspiracies were started much earlier and are much more perfidious, elaborate, and worse than initially thought, as already noted in the issue #266 of the 10th of January 2020.
    As a second measure, we showed that the exclusive infrastructure of our SOPR comprises more parts.

    Honestly, we have seen developments related to our OS and our SOPR, but at first we did not see all of those activities, which began years before we established our SOPR. Only recently, we took a deeper look at them, saw what was going on, and had to conclude that ...

    Simply said, we would never get to where we have to be. But we will not run after our own properties anymore.
    Therefore, we are thinking about demanding the customary triple damage compensation in addition to the share of up to 100% of the profit generated illegally.

    The other side now has to make the next step, that is, to pay for reproducing and performing our OS and to fulfill the other demands and requirements.

    We will see in some weeks if we have to send the cavalry or not.

    License Model (LM)
    With the issues #259 of the 17th of December 2019, #261 of the 28th of December 2019, and #266 of the 10th of January 2020 we have made the next revision of our LM.

    Also note that the fees for hardware are also subject to our discounts, which means a range from 5% with all 7 discounts to 13.75% without any discounts.

    If that is too expensive, then sell, or we do the job, or both.
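
    For illustration only, we sketch this discount arithmetic in Java below, under our own simplifying assumption (not a provision of the AoA and the ToS) that each of the 7 discounts lowers the relative share by an equal step of 1.25 percentage points between the undiscounted 13.75% and the fully discounted 5%:

      // Hypothetical illustration of the LM discount arithmetic.
      // Assumption (ours, not a provision of the AoA and the ToS): 7 equal
      // discount steps of 1.25 percentage points between 13.75% (no
      // discounts) and 5% (all 7 discounts granted).
      public class RoyaltyShare {
          static final double MAX_SHARE = 13.75; // percent, without discounts
          static final double STEP = 1.25;       // percentage points per discount

          static double relativeShare(int discountsGranted) {
              if (discountsGranted < 0 || discountsGranted > 7)
                  throw new IllegalArgumentException("0 to 7 discounts");
              return MAX_SHARE - STEP * discountsGranted;
          }

          public static void main(String[] args) {
              System.out.println(relativeShare(0)); // 13.75
              System.out.println(relativeShare(7)); // 5.0
          }
      }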

    Further steps
    We are recalling the creation and development of our OS to draw the white, yellow, or red line more and more precisely.
    By improving the precision we are also specifying more precisely and strengthening our legal claims, and showing the infringements of our rights by others.
    We will see what remains of the cloud computing platforms when we have pushed them back behind said line, which would also be a part of the so-called clear cut.


    18.January.2020

    10:00, 17:43, and 32:29 UTC+1
    Clarification

    *** Work in progress - better wording, explanation, and epilog, SOP originally designed for IPC but Evoos too ***
    In short: cloud computing is not the legal loophole and not the breach in our legal fortification wall. Quite the contrary, the cloud is already dead, or being more precise, was literally vaporware all the time or even has never existed at all.

    When finalizing the matter for the Website update of the 9th of March 2019 (see also the OntoLix and OntoLinux Website update of the 10th of March 2019), we were able to view the overall situation in a more complete way.
    Here, within the framework of the big scam, there was and still is also manipulation and serious crime on the part of the state and the private sectors, acting alone and above all together.
    Government agencies, like for example DARPA, NSF, NIST, NASA, etc., have already been named multiple times by us in the past. We also got evidence that explains our wondering why the company IBM is acting at the forefront all the time. In fact, IBM is collaborating with Google and both are collaborating with NASA in the field of cloud computing as well.

    Grid computing
    An online encyclopedia about the field of grid computing: "Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing [computers] in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers.[1] [...]
    Grids are a form of distributed computing whereby a "super virtual computer" is composed of many networked loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.
    [...]

    History
    The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999). This was preceded by decades by the metaphor of utility computing (1961): computing as a public utility, analogous to the phone system.[8][9]
    CPU scavenging and volunteer computing were popularized beginning in 1997 [...].
    The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together [...].
    In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid) and earlier utility computing. Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems [...]."

    Cloud computing
    An online encyclopedia about the field of cloud computing: "Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet.
    [...]
    The availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture and autonomic and utility computing has led to growth in cloud computing.[7 [Cloud Computing: Clash of the clouds. [2009]]][8][9 [What cloud computing really means. [2008]]]
    [...]
    The term cloud was used to refer to platforms for distributed computing as early as 1993, when Apple spin-off General Magic and AT&T used it in describing their (paired) Telescript and PersonaLink technologies.[15 [AT&T (1993). "What Is The Cloud?". "You can think of our electronic meeting place as the Cloud. PersonaLink was built from the ground up to give handheld communicators and other devices easy access to a variety of services. [...] Telescript is the revolutionary software technology that makes intelligent assistance possible. Invented by General Magic, AT&T is the first company to harness Telescript, and bring its benefits to people everywhere. [...] Very shortly, anyone with a computer, a personal communicator, or a television will be able to use intelligent assistance in the Cloud. And our new meeting place is open, so that anyone, whether individual, entrepreneur, or multinational company, will be able to offer information, goods, and services."]] In Wired's April 1994 feature "Bill and Andy's Excellent Adventure II", Andy Hertzfeld commented on Telescript, General Magic's distributed programming language:
    "The beauty of Telescript ... is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create sort of a virtual service. No one had conceived that before. The example Jim White [the designer of Telescript, X.400 and ASN.1] uses now is a date-arranging service where a software agent goes to the flower store and orders flowers and then goes to the ticket shop and gets the tickets for the show, and everything is communicated to both parties."[17]
    [...]
    The use of the cloud metaphor for virtualized services dates at least to General Magic in 1994, where it was used to describe the universe of "places" that mobile agents in the Telescript environment could go.
    [...]
    The use of the cloud metaphor is credited to General Magic communications employee David Hoffman, based on long-standing use in networking and telecom. In addition to use by General Magic itself, it was also used in promoting AT&T's associated PersonaLink Services.[22]
    [...]
    In August 2006, Amazon created subsidiary Amazon Web Services and introduced its Elastic Compute Cloud (EC2).
    In April 2008, Google released the beta version of Google App Engine.[23]
    In early 2008, NASA's OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[24]
    [...]
    In 2008, the U.S. National Science Foundation began the Cluster Exploratory program to fund academic research using Google-IBM cluster technology to analyze massive amounts of data,[27]
    [...]
    In February 2010, Microsoft released Microsoft Azure, which was announced in October 2008.[28]
    In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project intended to help organizations offering cloud-computing services running on standard hardware. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform. [...]
    On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[36] Among the various components of the Smarter Computing foundation, cloud computing is a critical part. On June 7, 2012, Oracle announced the Oracle Cloud.[37] This cloud offering is poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications [or Software as a Service] (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers.[38][39][40]
    In May 2012, Google Compute Engine was released in preview, before being rolled out into General Availability in December 2013.[41]
    [...]

    Similar concepts
    The goal of cloud computing is to allow users to take benefit from all of these technologies, without the need for deep knowledge about or expertise with each one of them. [...] The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system-level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing [(AC)] automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.[42]
    Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.[42 [Cloud Computing Uncovered: A Research Landscape. [2012]]]
    Cloud computing shares characteristics with:

  • Client-server model - Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[43]
  • Computer bureau - A service bureau providing computer services, particularly from the 1960s to 1980s.
  • Grid computing - A form of distributed and parallel computing, whereby a 'super and virtual computer [or virtual supercomputer]' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
  • Fog computing - Distributed computing paradigm that provides data, compute, storage and application services closer to client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client side (e.g. mobile devices), instead of sending data to a remote location for processing.
  • Mainframe computer - Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.
  • Utility computing - The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[44][45]
  • Peer-to-peer - A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
  • Green computing
  • Cloud sandbox - A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.

    Service models
    Though service-oriented architecture [(SOA)] advocates "Everything as a service" (with the acronyms EaaS or XaaS,[64] or simply aas), cloud-computing providers offer their "services" according to different models, of which the three standard models per NIST are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).[63] These models offer increasing abstraction; they are thus often portrayed as layers in a stack: infrastructure-, platform- and software-as-a-service, but these need not be related.

    Infrastructure as a service (IaaS)
    [...]
    The NIST's definition of cloud computing describes IaaS as[:] "[The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources] where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."[63 [The NIST Definition of Cloud Computing (Technical report). National Institute of Standards and Technology: U.S. Department of Commerce. (September 2011).]]
    IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.[...]

    Platform as a service (PaaS)
    The NIST's definition of cloud computing defines Platform as a Service as:[63 [The NIST Definition of Cloud Computing (Technical report). [2011]]]
    The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
    PaaS vendors offer a development environment to application developers. The provider typically develops toolkit and standards for development and channels for distribution and payment. In the PaaS models, cloud providers deliver a computing platform, typically including operating system, programming-language execution environment, database, and web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers."

    Comment
    Indeed, there were some rudimentary elements of everything, but not the whole system, which is our OS. For example, no

  • SoftBionics (SB), including AI, ML, CV, SLAM, CAS, SI, and so on,
  • High Performance and High Productivity Computing System (HP²CS),
  • Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS),
  • Intelligent Personal Assistant (IPA),
  • voice-based system,
  • Multimodal User Interface (MUI),
  • Mixed Reality (MR),
  • etc., etc., etc.

    Obviously, the listed QoS and reliability problems and shared characteristics are about the field of Cloud Computing of the second generation (CC 2.0) and hence about our OS.
    Also note that

  • on the one hand, the views of the SOA practitioners and the NIST cloud computing experts suggest that the fields of SOx and of grid and cloud computing are not connected per se respectively not on all layers of the NIST cloud computing stack, and
  • on the other hand, our description of the integrating Ontologic System Architecture (OSA) includes both the views of the SOA practitioners and of the NIST experts, which is also reflected in the description of microservices (see the quote about microservices below).

    Infrastructure as a Service (IaaS)
    An online encyclopedia about the field of Infrastructure as a Service (IaaS): "Infrastructure as a service (IaaS) are online services that provide high-level APIs used to dereference various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup etc. A hypervisor [...] runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.

    Overview
    Typically IaaS involves the use of a cloud orchestration technology like Open Stack, Apache Cloudstack or Open Nebula. This manages the creation of a virtual machine and decides on which hypervisor (i.e. physical host) to start it, enables VM migration features between hosts, allocates storage volumes and attaches them to VMs, usage information for billing and lots more.
    An alternative to hypervisors are Linux containers, which run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. Containerisation offers higher performance than virtualization, because there is no hypervisor overhead
    [...]
    The NIST's definition of cloud computing defines infrastructure as a service as:[3 [The NIST Definition of Cloud Computing (Technical report). [2011]]]
    The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
    According to the Internet Engineering Task Force (IETF), the most basic cloud-service model is that of providers offering IT infrastructure - virtual machines and other resources - as a service to subscribers.
    IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure.[4][unreliable source?] In this model, the cloud user patches and maintains the operating systems and the application software."
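
    For illustration only, we sketch the quoted on-demand provisioning style in Java below; the endpoint, the credential, and the JSON body are purely hypothetical placeholders of ours and not the Application Programming Interface (API) of any named provider:

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      // Hypothetical IaaS provisioning call: the endpoint, the credential,
      // and the JSON schema are placeholders that only illustrate the
      // on-demand model, not the API of a real provider.
      public class ProvisionInstance {
          public static void main(String[] args) throws Exception {
              String token = "..."; // placeholder credential
              String body = "{\"image\":\"ubuntu-20.04\",\"cpus\":2,\"ramGb\":4}";
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("https://iaas.example.com/v1/instances"))
                      .header("Authorization", "Bearer " + token)
                      .header("Content-Type", "application/json")
                      .POST(HttpRequest.BodyPublishers.ofString(body))
                      .build();
              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              // The provider is expected to answer with the identifier of the
              // new instance; billing then follows the utility computing model.
              System.out.println(response.statusCode() + " " + response.body());
          }
      }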

    Carrier cloud (computing)
    An online encyclopedia about the field of carrier cloud (computing): "In cloud computing a carrier cloud is a class of cloud that integrates wide area networks (WAN) and other attributes of communications service providers' carrier grade networks to enable the deployment of highly demanding applications in the cloud. In contrast, classic cloud computing focuses on the data center, and does not address the network connecting data centers and cloud users. This may result in unpredictable response times and security issues when business critical data are transferred over the Internet.
    [...]
    The advent of virtualization technology and cost effective computing hardware as well as ubiquitous Internet connectivity enabled a first wave of cloud services starting in the first years of the 21st century.[1][2][3][4]
    But many businesses and other organizations hesitated to move more demanding applications from on-premises dedicated hardware into private or public clouds. As a response, communications service providers started in the 2010/2011 time frame to develop carrier clouds that address perceived weaknesses in existing cloud services.[5 [Cloud Services: Carriers Want Cloud Control [...] 2011]]
    [...]
    Carrier clouds encompass data centers at different network tiers and wide area networks that connect multiple data centers to each other as well as to the cloud users. Links between data centers are used, for instance, for failover, overflow, backup, and geographic diversity. Carrier clouds can be set up as public, private, or hybrid clouds. The carrier cloud federates these cloud entities, using a single management system to orchestrate, manage, and monitor data center and network resources as a single system."

    Fog computing
    An online encyclopedia about the field of fog computing: "Fog computing[1 [Fog Computing introduction to a New Cloud Evolution. [2012]]][2 [Fog computing and its role in the internet of things. [2012]][3 [Connected Vehicles, the Internet of Things, and Fog Computing. [2011]][4] or fog networking, also known as fogging,[5 [IoT, from Cloud to Fog Computing. [2015]]][6] is an architecture that uses edge devices to carry out a substantial amount of computation, storage, communication locally and routed over the internet backbone.

    Concept
    Fog computing can be perceived both in large cloud systems and big data structures, making reference to the growing difficulties in accessing information objectively. This results in a lack of quality of the obtained content. The effects of fog computing on cloud computing and big data systems may vary. However, a common aspect is a limitation in accurate content distribution, an issue that has been tackled with the creation of metrics that attempt to improve accuracy.[7 [[eXtensible Messaging and Presence Protocol (]XMPP[)] Distributed Topology as a Potential Solution for Fog Computing. [2013]]]
    Fog networking consists of a control plane and a data plane. For example, on the data plane, fog computing enables computing services to reside at the edge of the network as opposed to servers in a data-center. Compared to cloud computing, fog computing emphasizes proximity to end-users and client objectives (e.g. operational costs, security policies, resource exploitation), dense geographical distribution and context-awareness (for what concerns computational and IoT resources), latency reduction and backbone bandwidth savings to achieve better quality of service (QoS)[8 [QoS-aware Deployment of IoT Applications Through the Fog. [2017]] and edge analytics/stream mining, resulting in superior user-experience[9] and redundancy in case of failure while it is also able to be used in Assisted Living scenarios.[10][11][12][13][14][15]
    Fog networking supports the Internet of Things (IoT) concept, in which most of the devices used by humans on a daily basis will be connected to each other. Examples include phones, wearable health monitoring devices, connected vehicle and augmented reality using devices [...].[16][17][18][19][20]
    [The Space and naval WARfare systems command (]SPAWAR[) respectively Naval information WARfare SYStems COMmand (NAVWARSYSCOM)], a division of the US Navy, is prototyping and testing a scalable, secure Disruption Tolerant Mesh Network to protect strategic military assets, both stationary and mobile. Machine control applications, running on the mesh nodes, "take over", when internet connectivity is lost. Use cases include Internet of Things e.g. smart drone swarms.[21]
    ISO/IEC 20248 [Automatic Identification and Data Capture Techniques - Data Structures - Digital Signature Meta Structure] provides a method whereby the data of objects identified by edge computing using Automated Identification Data Carriers [AIDC], a barcode and/or RFID tag, can be read, interpreted, verified and made available into the "Fog" and on the "Edge," even when the AIDC tag has moved on.[22 [Mobile Cloud Computing: Foundations and Service Models. [2017]]

    History
    In 2011, the need to extend cloud computing with fog computing emerged, in order to cope with huge number of IoT devices and big data volumes for real-time low-latency applications.[2 [Fog computing and its role in the internet of things. [2012]]][3 [Connected Vehicles, the Internet of Things, and Fog Computing. [2011]]]
    On November 19, 2015, Cisco Systems, ARM Holdings, Dell, Intel, Microsoft, and Princeton University, founded the OpenFog Consortium to promote interests and development in fog computing.[23 [Is Fog Computing the Next Big Thing in the Internet of Things. [2016]]] [...]

    Definition
    Both cloud computing and fog computing provide storage, applications, and data to end-users. However, fog computing is closer to end-users and has wider geographical distribution.[25]
    'Cloud computing' is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer.[26] Cloud computing can be a heavyweight and dense form of computing power.[...]
    The term 'Fog Computing' was defined by [...] Jonathan Bar-Magen Numhauser in the year 2011 as part of his PhD dissertation project proposal. In January 2012 he presented the concept in the Third International Congress of Silenced Writings in the University of Alcala and published in an official source[1 [Fog Computing introduction to a New Cloud Evolution. [2012]]][7 [[eXtensible Messaging and Presence Protocol (]XMPP[)] Distributed Topology as a Potential Solution for Fog Computing [2013]]].
    Also known as edge computing or fogging, fog computing facilitates the operation of compute, storage, and networking services between end devices and cloud computing data centers. While edge computing is typically referred to the location where services are instantiated, fog computing implies distribution of the communication, computation, storage resources, and services on or close to devices and systems in the control of end-users.[27 [Fog and IoT: An Overview of Research Opportunities. [2016]]][28 [Reliable Capacity Provisioning for Distributed Cloud/Edge/Fog Computing Applications. [2017]]] Fog computing is a medium weight and intermediate level of computing power.[29 [Fog Computing for Sustainable Smart Cities: A Survey. [2017]]] Rather than a substitute, fog computing often serves as a complement to cloud computing.[30]
    National Institute of Standards and Technology in March, 2018 released a definition of fog computing adopting much of Cisco's commercial terminology as NIST Special Publication 500-325, Fog Computing Conceptual Model, that defines fog computing as a horizontal, physical or virtual resource paradigm that resides between smart end-devices and traditional cloud computing or data center.[31] This paradigm supports vertically-isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated, distributed computing, storage, and network connectivity. Thus fog computing is most distinguished by distance from the edge. In the theoretical model of fog computing, fog computing nodes are physically and functionally operative between edge nodes and centralized cloud.[32] Much of the terminology is undefined, including key architectural terms like "smart", and the distinction between fog computing from edge computing is not generally agreed. Fog computing is more energy-efficient than cloud computing.[33]"

    Comment
    What marketing nonsense the fields of cloud, edge, and fog computing truly are. We have already presented a holistic technology with our OS and its ON, OW, and OV, and these paradigms of networking and computing are included, obviously and doubtlessly.
    As in the case of voice-based systems and Intelligent Personal Assistants (IPAs), cloud computing and fog computing came back to our OS, respectively our OS was always the source of inspiration and the blueprint.

    Amazon Elastic Compute Cloud (EC2)
    An online encyclopedia about the Amazon Elastic Compute Cloud: "Amazon Elastic Compute Cloud (EC2) forms a central part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS), by allowing users to rent virtual computers on which to run their own computer applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine [(VM)], which Amazon calls an "instance", containing any software desired. A user can create, launch, and terminate server-instances as needed, paying by the second for active servers - hence the term "elastic"."

    OpenNebula
    An online encyclopedia about OpenNebula: "OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service [(IaaS)].
    The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero. The first public release of the software occurred in 2008. The goals of the research were to create efficient solutions[...] for managing virtual machines on distributed infrastructures. It was also important that these solutions[...] had the ability to scale at high levels.
    Initial release: March 1, 2008"
    But we have no evidence that shows the true start of the project.

    In a report of the European Commission about the future of cloud computing, published in the year 2009, it is said:
    "Only few cloud dedicated research projects in the widest sense have been initiated - most prominent amongst them probably OpenNebula."

    Comment
    The date of the project start is not clearly given and not verified by a reference. Even more important is the date of the first release. Here we see a typical attempt to overcome the time gap. Honestly, we have to question the date 2005, because in 2005 we had only grid computing and cloud computing of the first generation (see the quote about the Amazon Elastic Compute Cloud above) and it does not take 3 years to implement the first public release of such a software, especially when all basic elements already exist.

    OpenStack
    An online encyclopedia about OpenStack: "OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to customers.[2] The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users either manage it through a web-based dashboard, through command-line tools, or through RESTful web services.
    OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA.
    In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack.[16][17] The mission statement was "to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable".[18]
    The OpenStack project intended to help organizations offer cloud-computing services running on standard hardware. The community's first official release [...] appeared three months later on 21 October 2010,[19] [...]. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform. The original cloud architecture was designed by the NASA Ames Web Manager, [...] and was a 2009 open source architecture called OpenNASA v2.0.[21] The cloud stack and open stack modules were merged and released as open source by the NASA Nebula[22] team in concert with Rackspace."

    CloudStack
    An online encyclopedia about CloudStack: "CloudStack is [...] cloud computing software for creating, managing, and deploying infrastructure cloud services. It uses existing hypervisors [...] for virtualization.
    [...]
    CloudStack was originally developed by Cloud.com, formerly known as VMOps.[4]
    VMOps was founded [...] in 2008.[5][6] [...] The company changed its name from VMOps to Cloud.com on May 4, 2010, when it emerged from stealth mode by announcing its product.[8][4][9]"

    Tuple space
    An online encyclopedia about the field of tuple space architecture or model or paradigm: "A tuple space is an implementation of the associative memory paradigm for parallel [and] distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider that there are a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and the consumers then retrieve data from the space that match a certain pattern. This is also known as the blackboard metaphor. Tuple space may be thought as a form of distributed shared memory.
    Tuple spaces were the theoretical underpinning of the Linda language
    [...]
    The most common software pattern used in JavaSpaces is the Master-Worker pattern. The Master hands out units of work to the "space", and these are read, processed and written back to the space by the workers. In a typical environment there are several "spaces", several masters and many workers; the workers are usually designed to be generic, i.e. they can take any unit of work from the space and process the task. [But this is not a typical utilization based on a tuple space, but still batch processing.]"
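
    For illustration only, we sketch the quoted producer-consumer mechanics of a tuple space in Java below; this is a strong simplification of ours and not the JavaSpaces API, with tuples as string arrays and null fields of a template acting as wildcards:

      import java.util.ArrayList;
      import java.util.List;

      // Minimal tuple space sketch (a strong simplification, not the
      // JavaSpaces API): a template matches a tuple when every non-null
      // template field equals the corresponding tuple field.
      public class TupleSpace {
          private final List<String[]> tuples = new ArrayList<>();

          public synchronized void write(String... tuple) {
              tuples.add(tuple);
              notifyAll(); // wake consumers blocked in take()
          }

          public synchronized String[] take(String... template) throws InterruptedException {
              while (true) {
                  for (String[] t : tuples)
                      if (matches(template, t)) { tuples.remove(t); return t; }
                  wait(); // block until a producer writes a new tuple
              }
          }

          private static boolean matches(String[] template, String[] tuple) {
              if (template.length != tuple.length) return false;
              for (int i = 0; i < template.length; i++)
                  if (template[i] != null && !template[i].equals(tuple[i])) return false;
              return true;
          }

          public static void main(String[] args) throws InterruptedException {
              TupleSpace space = new TupleSpace();
              new Thread(() -> space.write("task", "42")).start(); // producer posts data
              String[] taken = space.take("task", null);           // consumer, null = wildcard
              System.out.println(taken[0] + " " + taken[1]);       // prints: task 42
          }
      }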

    Space-Based Architecture (SBA)
    An online encyclopedia about the field of Space-Based Architecture (SBA): "Space-based architecture (SBA) is a software architecture pattern for achieving linear scalability of stateful, high-performance applications using the tuple space paradigm. It follows many of the principles of representational state transfer (REST), service-oriented architecture (SOA) and event-driven architecture (EDA), as well as elements of grid computing. With a space-based architecture, applications are built out of a set of self-sufficient units, known as processing-units (PU). These units are independent of each other, so that the application can scale by adding more units.
    [...]

    Components of space-based architecture
    [...]
    Processing unit
    The unit of scalability and fail-over. Normally, a processing unit is built out of a POJO (Plain Old Java Object) container, such as that provided by the Spring Framework.
    Virtual middleware
    A common runtime and clustering model, used across the entire middleware stack. The core middleware components in a typical SBA architecture are:

  • Messaging grid: Handles the flow of incoming transaction as well as the communication between services
  • Data grid: Manages the data in distributed memory with options for synchronizing that data with an underlying database
  • Processing grid: Parallel processing component based on the master/worker pattern (also known as a blackboard pattern) that enables parallel processing of events among different services
  • [Deployment manager]"

    Comment
    See also the Multi-Agent System (MAS) Java Agent Development Environment (JADE), holons or holonic systems, and the basic properties as well as the other relevant properties and integrated parts of our OS.
    The master-worker pattern is something else: it is related to batch processing and is utilized for high-volume and high-speed processing respectively High-Throughput Computing (HTC) (a computing paradigm that focuses on the efficient execution of a large number of loosely-coupled tasks), job schedulers, batch queues, and priority queues (see the two following quotes about the command pattern and the blackboard pattern below), and it is also utilized with and integrated in grid computing systems and blackboard systems (see the quote and the related comment about tuple space above).
    Indeed, the master-worker pattern is used for the implementation of blackboard systems (see once again the quote about tuple space above), but in general using the master-worker pattern does not mean that the related system is a blackboard system.
    This also matches our impression and investigation of Google's cluster manager Borg, which is utilized for HTC, and of its fundamentally different successor Kubernetes. But most importantly, it does not overcome the space and time gap, as is the case with microservices.
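
    For illustration only, we sketch the master-worker pattern in Java below, assuming generic workers that take any unit of work from a shared queue, which here stands in for the "space" respectively the blackboard:

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;

      // Minimal master-worker sketch: the master hands out units of work via
      // a shared queue and generic workers process whatever they take, i.e.
      // the batch-processing respectively HTC usage described above.
      public class MasterWorker {
          public static void main(String[] args) throws InterruptedException {
              BlockingQueue<Integer> work = new LinkedBlockingQueue<>();
              Runnable worker = () -> {
                  try {
                      while (true) {
                          int unit = work.take(); // take any unit of work
                          if (unit < 0) break;    // poison pill: shut down
                          System.out.println(Thread.currentThread().getName() + " processed " + unit);
                      }
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  }
              };
              Thread w1 = new Thread(worker), w2 = new Thread(worker);
              w1.start(); w2.start();
              for (int i = 0; i < 10; i++) work.put(i); // master posts the work units
              work.put(-1); work.put(-1);               // one pill per worker
              w1.join(); w2.join();
          }
      }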

    Command pattern
    An online encyclopedia about the command pattern: "In object-oriented programming, the command pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time.
    [...]
    The central ideas of this design pattern closely mirror the semantics of first-class functions and higher-order functions in functional programming languages. Specifically, the invoker object is a higher-order function of which the command object is a first-class argument.
    [...]

    Uses
    [...]
    Parallel Processing
    Where the commands are written as tasks to a shared resource and executed by many threads in parallel (possibly on remote machines - this variant is often referred to as the Master/Worker pattern)

  • Batch queue
  • [...]
  • Command queue
  • Function object
  • Job scheduler
  • Model-view-controller
  • Priority queue"
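
    For illustration only, we sketch the command pattern in Java below with two hypothetical actions; the lambdas mirror the first-class functions mentioned in the quote, and the queue corresponds to the command and batch queues listed above:

      import java.util.ArrayDeque;
      import java.util.Queue;

      // Minimal command pattern sketch: each command encapsulates all the
      // information needed to perform an action later, so an invoker can
      // queue commands and execute them at a different time or place.
      public class CommandDemo {
          interface Command { void execute(); }

          public static void main(String[] args) {
              Queue<Command> queue = new ArrayDeque<>();       // the invoker's command queue
              queue.add(() -> System.out.println("order flowers"));
              queue.add(() -> System.out.println("buy tickets"));
              while (!queue.isEmpty()) queue.poll().execute(); // executed later, in order
          }
      }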

    Blackboard pattern
    An online encyclopedia about the blackboard pattern: "In software engineering, the blackboard pattern is a behavioral design pattern[1] that provides a computational framework for the design and implementation of systems that integrate large and diverse specialized modules, and implement complex, non-deterministic control strategies.[2][1]
    [...]
    The blackboard model defines three main components:

  • blackboard - a structured global memory containing objects from the solution space
  • knowledge sources - specialized modules with their own representation
  • control component - selects, configures and executes modules.[2]

    [...]
    The blackboard pattern provides effective solutions for designing and implementing complex systems where heterogeneous modules have to be dynamically combined to solve a problem. This provides non-functional properties such as:

  • reusability
  • changeability
  • robustness.[2]

    The blackboard pattern allows multiple processes to work closer together on separate threads, polling and reacting when necessary."
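
    For illustration only, we sketch the three main components of the blackboard model in Java below; the knowledge sources and their contributions are hypothetical examples of ours:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.function.Consumer;

      // Minimal blackboard sketch with the three main components: a
      // structured global memory, specialized knowledge sources, and a
      // control component that executes the sources against the memory.
      public class BlackboardDemo {
          public static void main(String[] args) {
              List<String> blackboard = new ArrayList<>(List.of("raw input"));

              // Knowledge sources: each inspects the blackboard and may
              // contribute a partial solution exactly once.
              List<Consumer<List<String>>> sources = List.of(
                  b -> { if (b.contains("raw input") && !b.contains("tokens")) b.add("tokens"); },
                  b -> { if (b.contains("tokens") && !b.contains("hypothesis")) b.add("hypothesis"); }
              );

              // Control component: naively runs every source over the shared
              // memory twice, so later sources can react to earlier results.
              for (int round = 0; round < 2; round++)
                  for (Consumer<List<String>> source : sources) source.accept(blackboard);

              System.out.println(blackboard); // prints: [raw input, tokens, hypothesis]
          }
      }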

    Microservices
    An online encyclopedia about the field of microservices: "Microservices are a software development technique - a variant of the service-oriented architecture (SOA) structural style - that arranges an application as a collection of loosely coupled services.[1 [Microservices: The Journey So Far and Challenges Ahead. [2018]]] In a microservices architecture, services are fine-grained and the protocols are lightweight.
    There is no single definition for microservices. A consensus view has evolved over time in the industry. Some of the defining characteristics that are frequently cited include: [... citations made from documents of the years 2015 to 2018]
    A microservice is not a layer within a monolithic application (example, the web controller, or the backend-for-frontend).[8] Rather it is a self-contained piece of business functionality with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategy perspective, microservices architecture essentially follows the Unix philosophy of "Do one thing and do it well".[9] Martin Fowler describes a microservices-based architecture as having the following properties:[2 [Microservices. [2014]]]

  • Lends itself to a continuous delivery software development process. A change to a small part of the application only requires rebuilding and redeploying only one or a small number of services.[10]
  • Adheres to principles such as fine-grained interfaces (to independently deployable services), business-driven development (e.g. domain-driven design).[11 [SOA in Practice. [2007]]]

    It is common for microservices architectures to be adopted for cloud-native applications, and applications using lightweight container deployment. [...B]ecause of the large number (when compared to monolithic application implementations) of services, decentralized continuous delivery and DevOps with holistic service monitoring are necessary to effectively develop, maintain, and operate such applications.[12 [Microservice Prerequisites. [2014]]] A consequence of (and rationale for) following this approach is that the individual microservices can be individually scaled. In the monolithic approach, an application supporting three functions would have to be scaled in its entirety even if only one of these functions had a resource constraint.[13] With microservices, only the microservice supporting the function with resource constraints needs to be scaled out, thus providing resource and cost optimization benefits.[14]

    History
    A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that many of them had been recently exploring.[15 [Microservices: yesterday, today, and tomorrow. [2017]]] In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those ideas as a case study in March 2012 at 33rd Degree [Conference for Java Masters ...] in Micro services - Java, the Unix Way,[16] as did Fred George[17 [MicroService Architecture: A Personal Journey of Discovery. [2013]]] about the same time. Adrian Cockcroft, former director for the Cloud Systems at Netflix,[18 [ Netflix heads into the clouds. [2012]]] described this approach as "fine grained SOA", pioneered the style at web scale, as did many of the others mentioned in this article [...].[19 [Microservices. [2014]]]
    Microservices is a specialization of an implementation approach for service-oriented architectures (SOA) used to build flexible, independently deployable software systems.[7 [Microservices in Practice, Part 1: Reality Check and Service Design. [2017]]] The microservices approach is a first realisation of SOA that followed the introduction of DevOps and is becoming more popular for building continuously deployed systems.[20]

    Service Granularity
    A key step in defining a microservice architecture is figuring out how big an individual microservice has to be. There is no consensus or litmus test for this, as the right answer depends on the business and organizational context.[21] For instance, Amazon's policy is that the team implementing a microservice should be small enough that they can be fed with two pizzas.[2] [(A pizza box is a form factor of a rack case for computers or network switches.)] [...] But the key decision hinges around how "clean" the service boundary can be.
    On the opposite side of the spectrum, it is considered a bad practice to make the service too small, as then the runtime overhead and the operational complexity can overwhelm the benefits of the approach. When things get too fine-grained, alternative approaches must be considered - such as packaging the function as a library, moving the function into other microservices[7] or reducing their complexity by using Service Meshes[22 [Reducing Microservices Architecture Complexity with Istio and Kubernetes. [2019]]].

    Technologies
    Computer microservices can be implemented in different programming languages and might use different infrastructures. Therefore the most important technology choices are the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for the communication (RESTful HTTP, messaging, GraphQL ...) [44]. In a traditional system most technology choices like the programming language impact the whole systems. Therefore the approach for choosing technologies is quite different.[45]
    Service mesh
    In a service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes, [...], Docker Swarm, or [...]OS. The service proxies are responsible for communication with other service instances and can support capabilities such as service (instance) discovery, load balancing, authentication and authorization, secure communications, and others.
    In a service mesh, the service instances and their sidecar proxies are said to make up the data plane, which includes not only data management but also request processing and response. The service mesh also includes a control plane for managing the interaction between services, mediated by their sidecar proxies. There are several options for service mesh architecture: Istio (a joint project among Google, IBM, and [...]), Linkerd ([Cloud Native Computing Foundation (]CNCF[), a Linux Foundation] project ...[]), Consul ([...]) and others."
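
    For illustration only, we sketch a single self-contained microservice in Java below, exposing one fine-grained business function over lightweight HTTP using only the Java Development Kit (JDK); the port, path, and payload are hypothetical placeholders of ours:

      import com.sun.net.httpserver.HttpServer;
      import java.io.OutputStream;
      import java.net.InetSocketAddress;

      // Minimal sketch of one self-contained microservice: a single
      // fine-grained business function behind a lightweight HTTP interface.
      public class GreetingService {
          public static void main(String[] args) throws Exception {
              HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
              server.createContext("/greeting", exchange -> {
                  byte[] body = "{\"message\":\"hello\"}".getBytes();
                  exchange.getResponseHeaders().add("Content-Type", "application/json");
                  exchange.sendResponseHeaders(200, body.length);
                  try (OutputStream os = exchange.getResponseBody()) {
                      os.write(body);
                  }
              });
              server.start(); // such a service is deployed and scaled independently
          }
      }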

    Comment
    Microservices are a variant of SOA that also arranges an application as a collection of loosely coupled services and {technically correct but not historically, hence see below} microService-Oriented (mSO) technology has features of the fields of Service-Oriented Architecture of the first generation (SOA 1.0), Service-Oriented Computing of the first generation (SOC 1.0), and Software-Oriented Programming of the first generation (SOP 1.0).
    {statement not quite right, because the field of SOP was originally designed for IPC with our Evoos and the foundation of mSOA is already given with our Evoos} Microservices are based on grid computing 2.0 and cloud computing 2.0, which were created by us, and therefore could not exist in the years 2005 and 2006 at all, and therefore could not belong to SOA 1.0, SOC 1.0, and SOP 1.0.
    MicroService-Oriented (mSO) technology is message-based, whereas the blackboard architecture is not, though JavaSpaces supports messaging in dynamic environments.
    A Distributed operating system (Dos) (e.g. Apertos (Muse)) is made for an Ultra Large Distributed System (ULDS): massively distributed and loosely coupled.
    Once again, Kubernetes with its worker nodes or Processing Units (PUs) seems to be a minimal implementation of blackboard systems like the

  • Java Agent Development Environment (JADE), which is intrinsically based on Peer-to-Peer (P2P) computing and loosely-coupled active entities, and features a container run-time,
  • ActorAgent system, which is based on the Space-Based Architecture (SBA) respectively a Space-Based Agent System (SBAS), and features containers, and
  • SBA with mSOA, with actors or agents or both as Processing Units (PUs) respectively as microservices, where the PUs are fine-grained units of execution (see also Muse, Apertos, and the Cognac system based on Apertos).

    Howsoever, SOx, including SOC, SOP, SOA, and mSOA, has been included in our OS since its start.

    Prior art
    Some works of prior art and their integrations:

  • virtualization
    • Virtual Machine (VM),
    • operating system-level virtualization or containerization (e.g. our Evolutionary operating system (Evoos) and the Open Virtuozzo (OpenVZ)), and
    • hypervisor,
  • actor model and actor systems, including
    • actor operating systems (e.g. Apertos (Muse) and the Cognac system based on Apertos, and TUNES System), and
    • concurrent object-oriented actor-based system (e.g. Maude),
  • blackboard architectures, systems, applications, and services, including
    • systems of loosely-coupled applications and services,
    • tuple spaces,
    • Linda like systems,
    • SBA, and
    • agent-based systems, like for example space-based agent systems,
  • molecular architectures, systems, applications, and services, including
    • CHemical Abstract Machine (CHAM), based on very similar concepts and the same degree of parallelism as the tuple space architectural model or architecture of Linda
  • Jini network architecture or technology
    • including the distributed object exchange and coordination mechanism JavaSpaces based on the tuple space paradigm, pattern, model, or architecture and
    • being used for the construction of Distributed Systems (DSs) in the form of modular co-operating services, also utilized for the realization of systems based on SOC, SOP, and SOA in the field of {other term?} business processes enabled by Jini
  • Jini + Grid Computing of the first generation (GC 1.0)
  • Multi-Agent System (MAS) based on tuple space (e.g. JavaSpaces): Java Agent Development Environment (JADE)
  • actor + agent = actoragent based on SBA based on tuple space (e.g. JavaSpaces): Space-Based Agent System (SBAS), using JavaSpaces to create adaptive DSs
  • operating system-level virtualization or containerization + actoragent = actoragent containers based on SBA that provide Jini and JavaSpaces middleware to the role components hosted,
  • Scalable Infrastructure (SI) is a scalable communication framework based on Jini and JavaSpaces for
  • ServiceFrame service is an execution framework that contains specific functionality for advanced telecommunication and Internet services
  • Service-Oriented Computing of the first generation (SOC 1.0) based on Jini, JavaSpaces, Rio + grid computing + AC + ...
  • Semantic (World Wide) Web (SWWW) + grid computing = Semantic grid
  • Semantic (World Wide) Web (SWWW) + agent = Semantic agent (e.g. Nuin)
  • SOA + SWWW = SSOA
  • SOA + AC + SWWW = SOA 2005/2006 or Service-Oriented technologies of the second generation (SOx 2.0)
  • SOC federation, orchestration, ... but ...
  • Cognitive Grid but Evolutionary operating system (Evoos)

    Conclusion
    As we have proven, it is the same scandal as in relation to the fields of CPS, IoT, and NES: the fraudulent entities do not overcome the time gap between the years 2005/2006 and 2008. As we have already documented in the case of Google's Borg and its successor Kubernetes, the history is made blurry. For example,

  • dates are not given exactly,
  • technical terms are not used in the common way but incorrectly, and
  • technical descriptions are crooked

    by using the contents of our websites as a blueprint and for confusing the public.

    Here we see that the

  • time gap between 2005/2006 and 2008 is evident,
  • IaaS and PaaS, and also microServices-Oriented Architecture (mSOA), etc. are critical, and
  • underlying system itself is missing in the fields of SOx (SOC, SOP, SOA, and mSOA), grid, cloud, edge, and fog computing, ...
  • (SOx on the) foundational system level on which platforms, applications, and services are managed, operated, executed, orchestrated, and so on.

    Before the start of OntoLinux there were only Microsoft, with some unsuccessful marketing activities for the field of cloud computing of the first generation (CC 1.0), and Amazon with its subsidiary Amazon Web Services (AWS), which introduced its Elastic Compute Cloud (EC2). But that was merely renting out processing power (see once again the quote about Amazon Elastic Compute Cloud above) and not cloud computing of the second generation (CC 2.0), or even the other fields of buzzwords as we know them today, respectively not our ON, OW, and OV.
    But only after we uploaded the presentation of our OS, two months after Amazon, did it become clear what the successor of the Internet, our ON, the successor of the WWW and SWWW, our OW, and something totally new, our OV, would look like. It took around 2 more years until the first definitions of cloud computing in accordance with our OS were published and the first implementations of parts of our ON, OW, and OV were released.

    There is definitely a certain difference between the first generation and the second generation of the related fields. Even if there were no clear line, the next added element at least shows that the line has been crossed, as we have already said in the past.

    Obviously, our OS was taken as the source of inspiration and the blueprint, and therefore the details of the transition from the first generation to the second generation of the related fields are not that relevant at all.
    Finally, there is one OS with its ON, OW, and OV, and what is called grid, cloud, edge, and fog computing, as well as future Internet, and so on, is only the use of wrong marketing terms for the related parts of our

  • technologies with their systems and platforms,
  • goods with their applications, devices, vehicles, and other things, and
  • services.

    As a very simple implication, one has to recognize and acknowledge that there is no U.S.American, European, Chinese, Russian, and so on cloud, as is the case with the old Internet and the old WWW, but only one OS.

    At least some main threads can be seen that were taken to approach our OS with its OSA and OSC, and also ON, OW, and OV, and to simulate an ordinary technological progress:

  • from computing and networking and "The Network is the Computer." to integration
  • from wireline and wireless to integration of both
  • from static over automatic to adaptive and proactive networking infrastructure
  • from batch processing and parallel processing over cluster computing, High-Throughput Computing of the first generation (HTC 1.0) and Grid Computing of the first generation (GC 1.0) over Cloud Computing of the first generation (CC 1.0) to Grid Computing of the second generation (GC 2.0), Cloud Computing of the second generation (CC 2.0) and edge computing to fog computing
    worker node or Processing Unit (PU) of batch processing and grid computing to cloud computing to SOx, SBA, and orchestration to edge and fog computing
    • IaaS and PaaS,
    • carrier cloud, telco cloud,
    • management and orchestration system,
    • multi-cloud computing system, dynamic federation system, and service meshing system with registry, broker, or similar facility for objects, signals, data, applications, services, etc., and
  • mobile communication, mobile computing
  • from SOC 1.0 (and 2.0) and SOA 2005/2006 to mSOA
  • from CPS 1.0, IoT 1.0, and NES 1.0, as well as UbiC 1.0 to CPS 2.0, IoT 2.0, and NES 2.0, as well as UbiC 2.0 to holistic integration
  • from hardware virtualization and operating system-level virtualization to network virtualization and Network Functions Virtualization (NFV) (2012) to holistic integration,
  • Network Operating Systems (NOSs) or operating system for networks
  • from High-Throughput Computing of the first generation (HTC 1.0) to High-Throughput Computing of the second generation (HTC 2.0), Big Data Processing (BDP), and Data Science and Analytics (DSA) to holistic integration

  • only HTC 1.0 and GC 1.0, potentially CC 1.0, but not GC 2.0 and Cloud Computing of the second generation (CC 2.0), as well as edge computing and fog computing
  • loosely coupled,
  • blackboard pattern, architecture, and system, loosely-coupled computers, tasks, applications and services, active entities, actors and agents, communication, tuple space, Linda, Space-Based Architecture (SBA)
  • self-contained

    But the problem is that there is no integration of the main threads, which would result in HTC 2.0, IaaS 2.0, PaaS 2.0, and SaaS 2.0, or EaaS 2.0, etc.

  • reflection
  • resilience, including fault tolerance, and trustworthiness, including reliability, high availability, safety, security, performability, etc.
  • self-healing
  • proactivity
  • (SOx on the) foundational system level on which platforms, applications, and services are managed, operated, executed, orchestrated, and so on.
  • successor of the Internet
  • successor of the WWW and SWWW
  • ...

    cloud (2.0) with

  • operating system-level virtualization or container technology
  • common grid vs. common cloud vs. carrier cloud
  • SBA(?)
  • mSOA
  • management, federation, orchestration, meshing ...(?)
  • voice-based systems
  • IPA
  • AS and RS
  • Business Intelligence (BI) vs. High-Throughput Computing (HTC) and Big Data Processing (BDP) vs. Data Science and Analytics (DSA)
  • real-time analytics
  • ...

    Most elements and their composition and integration by our Ontologic System Architecture (OSA) are already included in the OntoCore (OC) and OntoBot (OB) components.


    19.January.2020

    10:30 UTC+1
    Oh, yeah ...!

    We confirm that we conform with this: Take over in a period of transition.

    Btw.: As long as a publication is marked

  • Sketching mode,
  • Work in progress, or
  • Proof-reading mode

    nothing is fixed.


    21.January.2020

    12:10, 14:33, 17:25, 21:55, and 27:00 UTC+1
    Clarification

    *** Work in progress - some comments and explanations, and epilog missing, SOP originally designed for IPC with Evoos ***
    In the OntoLix and OntoLinux Further steps of the 20th of February 2019 we already

  • recalled that we also have the Software-Defined Networking (SDN) technology integrated with the Ontologic System Components (OSC) by the integrating Ontologic System Architecture (OSA) of our Ontologic System (OS) and also
  • made clear that Network Functions Virtualization (NFV) and Deep Packet Inspection (DPI) are complementing the SDN functions and can be integrated and utilized with the OSC of our OS in the same way as the SDN technology, if required.

    In this way, we are able to run different network layer protocols and form different virtual networks (see also the OntoLix and OntoLinux Website update of the 20th of February 2019).

    In the OntoLix and OntoLinux Further steps of the 20th of February 2019, the Website update of the 8th of March 2019, and the Clarification of the 10th of March 2019 we also said that NFV, DPI, SDN, mSOA, and management and orchestration are critical.
    In the issue SOPR #265 of the 9th of January 2020 we said as well that the fields of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), carrier cloud, telco cloud, etc. are critical.
    Both led to the Clarification of the 18th of January 2020, in which we recalled once again that the underlying system itself is missing in the fields of

  • Service-Oriented technologies (SOx), including
    • Service-Oriented Computing (SOC),
    • Service-Oriented Programming (SOP),
    • Service-Oriented Architecture (SOA), and
    • microServices-Oriented Architecture (mSOA),
  • Grid, Cloud, Edge, and Fog Computing (GCEFC), including
    • IaaS, PaaS, and SaaS, as well as EaaS,
    • carrier cloud and telco cloud,
    • and so on,

    which drew our attention to NFV once again, which in turn drew our attention to SDN once again.

    In fact, our Ontologic System (OS) based on our Evolutionary operating system (Evoos) is also about

  • operating system
    • networking function,
  • real-time systems,
  • virtualization
    • Virtual Machine (VM),
    • operating system-level virtualization or containerization, and
    • hypervisor,
  • middleware,
  • embedded systems,
  • Semantic (World Wide) Web (SWWW) SOx (SSOC, SSOP, SSOA, and mSSOA),
  • successor of the Internet, and
  • successor of the World Wide Web (WWW),

    which implies that network switches and (software-)routers, etc. are also managed, operated, executed, orchestrated, and so on with the related parts of our OS, so that we have to correct our related explanations (see also above) with the finding that SDN and NFV are effectively included in our OS by design.

    Hardware virtualization
    An online encyclopedia about the field of hardware virtualization: "Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform.[1][2] At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.[3]
    [...]
    Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats.
    [...]
    Virtualization often exacts performance penalties, both in resources required to run the hypervisor, and as well as in reduced performance on the virtual machine compared to running native on the physical machine.
    [...]
    Examples of virtualization scenarios:

  • Running one or more applications that are not supported by the host OS: A virtual machine running the required guest OS could permit the desired applications to run, without altering the host OS.
  • Evaluating an alternate operating system: The new OS could be run within a VM, without altering the host OS.
  • Server virtualization: Multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server.
  • Duplicating specific environments: A virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state.
  • Creating a protected environment: If a guest OS running on a VM becomes damaged in a way that is not cost-effective to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used upon rebooting the guest.

    Full virtualization
    In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

    Paravirtualization
    In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying[...] the "guest" OS.

    Operating-system-level virtualization
    In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system."

    Input/output (I/O) virtualization
    An online encyclopedia about the field of Input/output (I/O) virtualization: "Input/output (I/O) virtualization is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.[1]
    The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs).[2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications.
    [...]
    Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers.
    [...]
    In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server.
    [...]
    However, increased utilization created by virtualization placed a significant strain on the server's I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they wait for data.[4]
    Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server itself, thereby ensuring that the I/O link itself is not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O intensive applications, this approach can help increase both VM performance and the potential number of VMs per server.[2]
    Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization [(hyperlink to Virtual Private Server (VPS))] for both production server and end-user applications.[4 [Virtualization's Promise And Problems. [2008]]]"
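
    The quality of service mechanism described in the last paragraph can be sketched as a token bucket that regulates the I/O bandwidth a single virtual machine may draw from the shared link; the rates below are illustrative assumptions, not values from any real product.

    # A sketch of the QoS idea from the quote above: a token bucket that caps
    # the I/O bandwidth one (virtual) machine may draw from the shared link.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Block until nbytes of bandwidth budget are available."""
            while True:
                now = time.monotonic()
                # Refill tokens in proportion to the elapsed time.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Illustrative limit: 10 MB/s sustained, 1 MB burst, per virtual machine.
    vm_limit = TokenBucket(rate_bytes_per_s=10_000_000, burst_bytes=1_000_000)
    vm_limit.consume(65536)   # charge one 64 KiB I/O request against the budget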

    Operating system-level virtualization
    An online encyclopedia about the field of operating system-level virtualization: "[Operating system]-level virtualization [or containerization] refers to an operating system paradigm in which the kernel allows the existence of multiple isolated user space instances. Such instances, called containers (Solaris, Docker), Zones (Solaris), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernel (DragonFly BSD), or jails (FreeBSD jail or chroot jail),[1] may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
    [...]
    The term "container," while most popularly referring to OS-level virtualization systems, is sometimes ambiguously used to refer to fuller virtual machine environments operating in varying degrees of concert with the host OS [...].
    [...]
    With operating-system-virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can only see the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, to each of which a subset of the computer's resources is allocated. Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with one another.
    Containerization has similarities to application virtualization: In the latter, only one computer program is placed in an isolated container and the isolation applies to file system only.
    [...]
    Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually-distrusting users.
    [...]
    Overhead
    Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in OS-level virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization ([...]) and paravirtualization ([...]). This form of virtualization also does not require hardware support for efficient performance.
    Flexibility
    Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. [...]
    Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones [...] are also available on x86-based Solaris systems, providing a complete Linux userspace and support for the execution of Linux applications [...]."

    Comment
    No prior art has been found.
    Indeed, everything in Unix is a file, which means that not only data are handled as a file, but also all processes, inter-process communications between processes, pipes between software applications, and channels between computers and devices.
    But the original chroot jail does not qualify as process isolation and virtualization, because the chroot operation only changes the root folder of a system and the related rights of system users with and without administrative privilege, but adds no user space instance and also virtualizes nothing at all. As its name already clearly says, it is just some kind of switch or management for file system namespace access. Accordingly, the developer handbook of freebsd.org says in chapter 3.5. Limiting your program's environment: "Root user can easily escape from chroot. Chroot was never supposed to be used as a security mechanism."
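
    A minimal sketch of the point made above, using Python's os.chroot wrapper around the chroot system call (the jail directory path is a hypothetical example; root privileges are required):

    # What chroot actually does: it only remaps the calling process's view of
    # the file system root; no new user space instance and no virtualization
    # is created.
    import os

    os.chroot("/srv/jail-root")   # hypothetical directory prepared beforehand
    os.chdir("/")                 # without this, '..' can still escape the jail
    # From here on, '/' refers to /srv/jail-root, but the process still shares
    # the same kernel, process table, network stack, and users as the host.
    print(os.listdir("/"))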

    FreeBSD jail
    An online encyclopedia about FreeBSD jail: "[...] the FreeBSD jail doesn't achieve true virtualization because it doesn't allow the virtual machines to run different kernel versions than that of the base system. All virtual servers share the same kernel [...]. There is no clustering or process migration capability included [...]."

    The document titled "Jails: Confining the omnipotent root" and published in March 2000: "The FreeBSD "Jail" facility provides the ability to partition the operating system environment, while maintaining the simplicity of the UNIX "root" model. In Jail, users with privilege find that the scope of their requests is limited to the jail, allowing system administrators to delegate management capabilities for each virtual machine environment. Creating virtual machines in this manner has many potential uses; the most popular thus far has been for providing virtual machine services in Internet Service Provider environments.
    [...]
    [...] we describe the new FreeBSD "Jail" facility, which provides a strong partitioning solution, leveraging existing mechanisms, such as chroot(2), to what effectively amounts to a virtual machine environment. [...]
    [...] each Jail is a virtual FreeBSD environment permitting local policy to be independently managed, with much the same properties as the main system itself [...].
    [...]
    Jail takes advantage of the existing chroot(2) behaviour to limit access to the file system name-space for jailed processes. When a jail is created, it is bound to a particular file system root. [...]
    [...]
    The change of the suser(9) API modified approx 350 source lines [...]. The vast majority of these changes were generated automatically with a script.
    The implementation of the jail facility added approx 200 lines of code in total [...] and about 200 lines in two new kernel files."

    Comment
    FreeBSD jail was introduced in FreeBSD 4.0 on the 14th of March 2000, or 4 months after the presentation of our Evoos.
    But according to a claim of one of the main developers, talks with an Internet Service Provider (ISP) about jail already began in 1999. Maybe, but so also did talks about our Evoos.

    There exists no hardware virtualization in the case of FreeBSD jail, because only a single operating system exists and runs on a real machine. Like chroot jail, it does not virtualize anything at all, but merely partitions the operating system environment and manages the access to the file system namespace to provide one or more environments for proper virtual machines (see also the Investigations::Multimedia of the 16th of March 2019).
    Honestly, using the terms virtualization and virtual machine in the sense of system monitor, system partition, multi-user or multi-tasking management, process isolation, or other related standard functionalities of an operating system (os) is misleading or even just wrong, and seems to have been done for marketing reasons, because hardware virtualization was a trend at that time.

    According to an online encyclopedia, "[v]irtual disks and virtual drives are common components of virtual machines in hardware virtualization, but they are also widely used for various purposes unrelated to virtualization, such as for the creation of logical disks."
    Correspondingly, one should better call FreeBSD jail a logical machine environment instead of a virtual machine environment.

    As already discussed in the Investigations::Multimedia of the 16th of March 2019, (chroot) jails are fundamentally different from containers, control groups (cgroups), and namespaces.

    Virtual Private Server (VPS)
    An online encyclopedia about the field of Virtual Private Server (VPS): "A virtual private server (VPS) is a virtual machine sold as a service by an Internet hosting service. [...]
    [...] For many purposes they are functionally equivalent to a dedicated physical server, and being software-defined, are able to be much more easily created and configured.
    [...]
    [...] The physical server typically runs a hypervisor which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of resources of the physical server, typically in a manner in which the guest is not aware of any other physical resources save for those allocated to it by the hypervisor. As a VPS runs its own copy of its operating system, customers have superuser-level access to that operating system instance, and can install almost any software that runs on the OS [...]."

    Open Virtuozzo (OpenVZ)
    An online encyclopedia about OpenVZ: "OpenVZ (Open Virtuozzo) is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and [Linux Containers] LXC.
    [...]
    While virtualization technologies such as VMware and Xen provide full virtualization and can run multiple operating systems and different kernel versions, OpenVZ uses a single Linux kernel and therefore can run only Linux. All OpenVZ containers share the same architecture and kernel version. This can be a disadvantage in situations where guests require different kernel versions than that of the host. However, as it does not have the overhead of a true hypervisor, it is very fast and efficient.[1]
    [...]
    OpenVZ is limited to providing only some [Virtual Private Network (]VPN[)] technologies based on [Point-to-Point Protocol (]PPP[)] (such as PPTP/L2TP) and [network ]TUN[nel]/[network ]TAP. [Internet Protocol security (]IPsec[)] is supported inside containers since kernel 2.6.32 [released on the 3rd of December 2009]."

    Comment
    The initial public prerelease of Virtuozzo was named ASPcomplete and released in August 2000.

    Linux Containers (LXC)
    An online encyclopedia about LXC: "LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
    The Linux kernel provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and also namespace isolation functionality that allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs and mounted file systems.[3]
    LXC combines the kernel's cgroups and support for isolated namespaces to provide an isolated environment for applications. Early versions of Docker used LXC as the container execution driver, though LXC was made optional in v0.9 and support was dropped in Docker v1.10. [4]
    [...]
    LXC provides operating system-level virtualization through a virtual environment that has its own process and network space, instead of creating a full-fledged virtual machine. LXC relies on the Linux kernel cgroups functionality that was released in version 2.6.24. It also relies on other kinds of namespace isolation functionality, which were developed and integrated into the mainline Linux kernel.
    [...]
    LXC is similar to other OS-level virtualization technologies on Linux such as OpenVZ and Linux-VServer, as well as those on other operating systems such as FreeBSD jails, AIX Workload Partitions and Solaris Containers.
    [...]
    Developer(s) Kernel: Virtuozzo, IBM, Google, [...] and others
    Initial release August 6, 2008 [...]"
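
    The cgroups and namespace building blocks described in the quote can be tried directly with the util-linux unshare(1) tool; a minimal sketch, assuming a Linux host and root privileges:

    # The kernel building blocks LXC combines, driven via unshare(1): the
    # child gets its own UTS, PID, and mount namespaces, so the hostname
    # change and the process tree are isolated while the same host kernel
    # keeps running underneath.
    import subprocess

    subprocess.run([
        "unshare", "--uts", "--pid", "--fork", "--mount-proc",
        "sh", "-c", "hostname container-demo && hostname && ps ax"
    ], check=True)

    Inside the child shell, ps ax shows a process tree starting at PID 1 and the changed hostname is invisible to the host, which is the essence of operating system-level virtualization as opposed to a mere chroot.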

    Virtual kernel architecture (vkernel)
    An online encyclopedia about the field of virtual kernel architecture (vkernel): "A virtual kernel architecture (vkernel) is an operating system virtualisation paradigm where kernel code can be compiled to run in the user space, for example, to ease debugging of various kernel-level components,[3][4][5] in addition to general-purpose virtualisation and compartmentalisation of system resources. It is used by DragonFly BSD in its vkernel implementation since DragonFly 1.7,[2] having been first revealed in September 2006,[3][6] and first released in the stable branch with DragonFly 1.8 in January 2007.[1][7][8][9] The long-term goal, in addition to easing kernel development, is to make it easier to support internet-connected computer clusters without compromising local security.[3][4] Similar concepts exist in other operating systems as well; in Linux, a similar virtualisation concept is known as user-mode Linux;[10][7] whereas in NetBSD since the summer of 2007, it has been the initial focus of the rump kernel infrastructure.[11 [Introduce RUMPs - Runnable Userspace Meta-Programs. [2007]]]
    [...]
    The vkernel concept is different from FreeBSD jail in that jail is only meant for resource isolation, and cannot be used to develop and test new kernel functionality in the userland, because each jail is sharing the same kernel.[7] (DragonFly, however, still has FreeBSD jail support as well.[7])
    In DragonFly, the vkernel can be thought of as a first-class computer architecture, comparable to i386 or amd64 [...]."

    Comment
    See the Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE) approach.
    We also got the confirmation that FreeBSD "jail is only meant for resource isolation", and therefore is not sufficient for (kernel) process isolation and ultimately is not another operating system-level virtualization or containerization technology, as wrongly claimed in the quoted descriptions about operating system-level virtualization, FreeBSD jail, and LXC (see above).
    Furthermore, vkernel alone and in combination with FreeBSD jail does not result in operating system-level virtualization or containerization. In contrast, our kernel-less reflective, fractal, holonic Ontologic System with nanokernel and microkernel integration already integrates all in one.
    We also recall once again that the NetBSD rump kernel is only the first implementation of the so-called anykernel concept presented before with our OS, where drivers can be compiled either into the monolithic kernel or in user space on top of a light-weight kernel, and run in both.

    Linux-VServer
    An online encyclopedia about Linux-VServer: "Linux-VServer is a virtual private server implementation that was created by adding operating system-level virtualization capabilities to the Linux kernel. [...] It is not related to the Linux Virtual Server project, which implements network load balancing.
    Linux-VServer is a jail mechanism in that it can be used to securely partition resources on a computer system (such as the file system, CPU time, network addresses and memory) in such a way that processes cannot mount a denial-of-service attack on anything outside their partition.
    Each partition is called a security context, and the virtualized system within it is the virtual private server.
    [...]
    Virtual private servers are commonly used in web hosting services, where they are useful for segregating customer accounts, pooling resources and containing any potential security breaches."

    Orchestration
    An online encyclopedia about the field of orchestration: "Orchestration is the automated configuration, coordination, and management of computer systems and software.[1]
    [...]
    Orchestration is often discussed in the context of service-oriented architecture, [(hardware)] virtualization, provisioning, converged infrastructure and dynamic data center topics. Orchestration in this sense is about aligning the business request with the applications, data, and infrastructure.[4 [A Business Resolution Engine for Cloud Marketplaces. [2011]]]
    The main difference between a workflow "automation" and an "orchestration" (in the context of cloud computing) is that workflows are processed and completed as processes within a single domain for automation purposes, whereas orchestration includes a workflow and provides a directed action towards larger goals and objectives.[1]
    In this context, and with the overall aim to achieve specific goals and objectives (described through quality of service parameters), for example, meet application performance goals using minimized cost[5 [Auto-scaling to minimize cost and meet application deadlines in cloud workflows. [2011]]] and maximize application performance within budget constraints,[6 [Scaling and Scheduling to Maximize Application Performance within Budget Constraints in Cloud Workflows. [2013]]]"

    Comment
    Obviously, the field of Cloud Computing of the second generation (CC 2.0) got its momentum around the years 2008 to 2011, which corresponds with the time gap between the years 2005/2006 and 2008. But indeed, Service-Oriented Computing of the first generation (SOC 1.0) existed before this period.
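
    The distinction drawn in the quote between workflow automation and goal-directed orchestration can be sketched as a reconciliation loop that continuously compares a desired state with the observed state and issues corrective actions; the provider functions below are hypothetical stand-ins for a real infrastructure API.

    # Orchestration as a goal-directed loop, as opposed to a workflow that
    # runs once within a single domain.
    import time

    def observe_running_instances():      # hypothetical: query the platform
        return {"web": 2, "worker": 1}

    def start_instance(service):          # hypothetical: provisioning call
        print(f"starting one instance of {service}")

    def stop_instance(service):           # hypothetical: deprovisioning call
        print(f"stopping one instance of {service}")

    desired = {"web": 3, "worker": 1}     # the goal, e.g. from a QoS policy

    def reconcile_once():
        actual = observe_running_instances()
        for service, want in desired.items():
            have = actual.get(service, 0)
            for _ in range(want - have):
                start_instance(service)
            for _ in range(have - want):
                stop_instance(service)

    for _ in range(3):                    # a real orchestrator loops forever
        reconcile_once()
        time.sleep(1)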

    Software-Defined Networking (SDN)
    An online encyclopedia about the Software-Defined Networking (SDN) technology: "Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring making it more like cloud computing than traditional network management.[1] SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex while current networks require more flexibility and easy troubleshooting. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers which are considered as the brain of SDN network where the whole intelligence is incorporated. However, the intelligence centralization has its own drawbacks when it comes to security,[1] scalability and elasticity[1] and this is the main issue of SDN.
    SDN was commonly associated with the OpenFlow protocol (for remote communication with network plane elements for the purpose of determining the path of network packets across network switches) since the latter's emergence in 2011. However, since 2012[2][3] OpenFlow for many companies is no longer an exclusive solution, they added proprietary techniques. These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform.
    [Software-Defined networking in a Wide Area Network (SD-WAN) applies similar technology to a wide area network (WAN).[4 [Predicting SD-WAN Adoption. [2015]]]
    [...]

    History
    The history of SDN principles can be traced back to the separation of the control and data plane first used in the public switched telephone network as a way to simplify provisioning and management well before this architecture began to be used in data networks.
    The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and forwarding functions in a proposed interface standard published in 2004 appropriately named "Forwarding and Control Element Separation" (ForCES).[8] The ForCES Working Group also proposed a companion SoftRouter Architecture.[9 [The SoftRouter Architecture. [2004]]] Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP Services Protocol[10 [Linux Netlink as an IP Services Protocol. [2003]]] and A Path Computation Element (PCE)-Based Architecture.[11]
    [...]
    The use of [...] software in split control/data plane architectures traces its roots to the Ethane project at Stanford's computer sciences department. Ethane's simple switch design led to the creation of OpenFlow.[12 [Ethane: Taking Control of the Enterprise. [2007]]] An API for OpenFlow was first created in 2008.[13] That same year witnessed the creation of NOX - an operating system for networks.[14 [NOX: Towards an Operating System for Networks. [2008]]]
    [...]
    Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, co-developed with NTT and Google. A notable deployment was Google's B4 deployment in 2012.[17][18] Later Google acknowledged their first OpenFlow with Onix deployments in their datacenters at the same time.[19] Another known large deployment is at China Mobile.[20]
    The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow.

    Concept
    SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.[23]
    The OpenFlow protocol can be used in SDN technologies. The SDN architecture is:

  • Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • [...]

    The need for a new network architecture
    The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures.[24] Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments.[25] Some of the key computing trends driving the need for a new network paradigm include:
    Changing traffic patterns
    Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of "east-west" machine-to-machine traffic before returning data to the end user device in the classic "north-south" traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device (including their own), connecting from anywhere, at any time. Finally, many enterprise data centers managers are contemplating a utility computing model, which might include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide area network.
    The "consumerization of IT" The rise of cloud services
    Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Enterprise business units now want the agility to access applications, infrastructure, and other IT resources on demand and à la carte. To add to the complexity, IT's planning for cloud services must be done in an environment of increased security, compliance, and auditing requirements, along with business reorganizations, consolidations, and mergers that can change assumptions overnight. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage, and network resources, ideally from a common viewpoint and with a common suite of tools.
    "Big data" means more bandwidth
    Handling today's "big data" or mega datasets requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of mega datasets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity without going broke.[26]

    SDN Control Plane
    Centralized - Hierarchical - Distributed
    The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions,[28][29 [Design considerations for managing wide area software defined networks. [2014]]] distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches,[30 [Onix: A Distributed Control Platform for Large scale Production Networks, [2010]]][31 [Adaptive Resource Management and Control in Software Defined Networks. [2015]]] controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications.
    Controller Placement
    A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices,[32 [The Controller Placement Problem. [2012]]] especially in the context of large networks. Other objectives that have been considered involve control path reliability,[33 [On the placement of controllers in software-defined networks. [2012]]] fault tolerance,[34 [Five nines of southbound reliability in software defined networks. [2014]]] and application requirements.[35]

    SDN flow forwarding (sdn)
    Proactive vs Reactive vs Hybrid[36 [OpenFlow: Proactive vs Reactive. [2013]]][37 [Reactive, Proactive, Predictive: SDN Models. [2012]]]
    [...]

    Applications
    SDMN
    Software-defined mobile networking (SDMN)[38 [Mobileflow: Toward software-defined mobile networks. [2013]]][39 [Software Defined Mobile Networks (SDMN): Beyond LTE Network Architecture. [2015]]] is an approach to the design of mobile networks where all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and radio access network.[40 [SDN and NFV Integration in Generalized Mobile Network Architecture. [2015]]] It is proposed as an extension of SDN paradigm to incorporate mobile network specific functionalities.[41] [...]
    SD-WAN
    An SD-WAN is a Wide Area Network (WAN) managed using the principles of software-defined networking.[42] [...] Control and management is administered separately from the hardware with central controllers allowing for easier configuration and administration.[43]
    SD-LAN
    An SD-LAN is a Local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management and quality of service.[44] SD-LAN decouples control management, and data planes to enable a policy driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller.[45 [Aerohive Introduces the Software-defined LAN. [2016]]]
    Security using the SDN paradigm
    SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network, and its capacity to reprogram the data plane at any time. [...]
    [...]

    Relationship to NFV
    NFV Network Function Virtualization is a concept that complements SDN. Thus, NFV is not dependent on SDN or SDN concepts. NFV disunites software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run network services software versions that previously were hardware-based. These software-based services that run in an NFV environment are called Virtual Network Functions (VNF).[64 [Foundations of Modern Networking: SDN, NFV, QoE, IoT, and Cloud. [2016]]] SDN-NFV hybrid program was provided for high efficiency, elastic and scalable capabilities NFV aimed at accelerating service innovation and provisioning using standard IT virtualization technologies.[64][65 [An Agile Internet of Things (IoT) based Software Defined Network (SDN) Architecture. [2018]]] SDN provides the agility of controlling the generic forwarding devices such as the routers and switches by using SDN controllers. On the other hand, NFV agility is provided for the network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of VNFs, and that's why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.[66 [Platform to Multivendor Virtual and Physical Infrastructure]]

    Relationship to DPI
    DPI Deep Packet Inspection provides network with application-awareness, while SDN provides applications with network-awareness.[67 [The Role Of DPI In An SDN World. [2012]]] Although SDN will radically change the generic network architectures, it should cope with working with traditional network architectures to offer high interoperability. The new SDN based network architecture should consider all the capabilities that are currently provided in separate devices or software other than the main forwarding devices (routers and switches) such as the DPI, security appliances [68 [Global Information Infrastructure, Internet Protocol Aspects And NextGeneration Networks. [2015]]]
    [A graphic of the document titled "SDN Architecture Overview" shows boxes labelled "CDPI Agent" and "NBI Agent".]"

    Comment
    We summarize some terms used to describe SDN:

  • brain,
  • network intelligence,
  • centralized (initially), hierarchical, and decentralized,
  • reactive, proactive, predictive,
  • network operating system,
  • agent,
  • cloud computing,
  • CPS, IoT, and NES,
  • ...,
  • ON, OW, and OV, and
  • finally our OS, which integrates all in one.
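
    The separation of control plane and data plane that runs through all of these terms can be modeled in a few lines; the following is a conceptual sketch with illustrative classes only, not any real controller or switch API.

    # A logically centralized controller computes match->action rules and
    # pushes them into the flow tables of dumb forwarding elements.
    class Switch:
        def __init__(self, name):
            self.name = name
            self.flow_table = []          # (match_fn, action): the data plane

        def install_rule(self, match_fn, action):
            self.flow_table.append((match_fn, action))

        def forward(self, packet):
            for match_fn, action in self.flow_table:
                if match_fn(packet):
                    return action
            return "send-to-controller"   # table miss: ask the control plane

    class Controller:
        def __init__(self, switches):
            self.switches = switches      # global network view: control plane

        def program_network(self):
            for sw in self.switches:
                sw.install_rule(lambda p: p.get("dst") == "10.0.0.2", "port-1")

    s1 = Switch("s1")
    Controller([s1]).program_network()
    print(s1.forward({"dst": "10.0.0.2"}))   # -> port-1
    print(s1.forward({"dst": "10.0.0.9"}))   # -> send-to-controller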

    See also The Proposal about our Evolutionary operating system (Evoos) for the following chapters:

  • chapter 2.2.1 Components of Operating Systems
    "Among the components of most operating systems are (according to [Sil[berschatz, Galvin: Operating System Concepts. ...],1994]):
    • the management of processes,
    • the administration of the main memory,
    • the management of non-volatile memory,
    • the file management,
    • the security system,
    • the network functions and
    • the system of the command interpreter"
  • chapter 2.4 Virtual Machine
  • chapter 3.2 Functioning of a Brain
    "According to these findings, the model of a permanently connected network is no longer tenable (see [Chevallier]).
    [...]
    The idea of a modern computer-like operating system of a brain is linked to the proposal of Johnson-Laird (see [Joh[nson-Laird, Philip: Mental minds. ...], 1983]). The designed system corresponds to a self-organizing neural network. Possible deficits in a subnetwork do not stop the actions occurring in the whole network."
  • chapter 5 Summary
    "[...]
  • the hearing - the microphone, the network card and the modem
  • [...]
  • the speaking - the loudspeaker, the network card and the modem"

    Also note our magic, which adds and integrates

  • SoftBionics (SB),
  • Evoos,
  • SOx (SOC, SOP, SOA, and mSOA),
  • computing and networking, for example
    • operating system (os), including
      • reflective os,
      • distributed os,
      • real-time os,
      • Agent-Based Operating System (ABOS),
      • Kernel-Less Operating System (KLOS),
      • and so on,
    • High Performance and High Productivity Computing Systems (HP²CSs),
    • High-Throughput Computing (HTC),
    • Many-Task Computing (MTC),
    • Distributed Systems (DSs), including
      • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),

      and

    • Software-Defined technologies (SDx) (e.g. SDN, SDWAN, SDLAN, and SDMN),
  • and all of the rest in one.

    Specifically, a reflective KLOS is able to change all network functions, including the drivers of the network hardware, at runtime, and hence includes SDx.

    OpenFlow
    An online encyclopedia about OpenFlow: "OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network.[1 [OpenFlow: Enabling innovation in campus networks. [2008]]]
    OpenFlow enables network controllers to determine the path of network packets across a network of switches. The controllers are distinct from the switches. This separation of the control from the forwarding allows for more sophisticated traffic management than is feasible using access control lists (ACLs) and routing protocols. [...] The protocol's inventors consider OpenFlow an enabler of software-defined networking (SDN).
    [...]
    OpenFlow allows remote administration of a layer 3 switch's packet forwarding tables, by adding, modifying and removing packet matching rules and actions. [...]
    [...]
    The OpenFlow protocol is layered on top of the Transmission Control Protocol (TCP) and prescribes the use of Transport Layer Security (TLS). [...]
    [...]

    History
    The Open Networking Foundation (ONF), a user-led organization dedicated to promotion and adoption of software-defined networking (SDN),[5] manages the OpenFlow standard.[6] ONF defines OpenFlow as the first standard communications interface defined between the control and forwarding layers of an SDN architecture. OpenFlow allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). It is the absence of an open interface to the forwarding plane that has led to the characterization of today's networking devices as monolithic, closed, and mainframe-like. A protocol like OpenFlow is needed to move network control out of proprietary network switches and into control software that's open source and locally managed.[7 [Software-Defined Networking (SDN): The New Norm for Networks]]
    [...]
    In April 2012, Google[...] described how the company's internal network had been completely re-designed over the previous two years to run under OpenFlow with substantial efficiency improvement.[22 [Going With the Flow: Google's Secret Switch to the Next Wave of Networking. [2012]]]
    In January 2013, NEC unveiled a virtual switch for Microsoft's Windows Server 2012 Hyper-V hypervisor, which is designed to bring OpenFlow-based software-defined networking and network virtualisation to those Microsoft environments.[23]"
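
    As a concrete, hedged example of the rule manipulation described above, the following sketch uses the Ryu OpenFlow controller framework (assuming the ryu package is installed) to install a lowest-priority "table-miss" entry that sends unmatched packets to the controller; it mirrors Ryu's canonical introductory application.

    # A minimal Ryu application: on switch connect, install a table-miss flow
    # entry so that packets matching no other rule go to the controller.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            # Match every packet; send it to the controller without buffering.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                    match=match, instructions=inst)
            datapath.send_msg(mod)

    Run with ryu-manager against an OpenFlow 1.3 switch such as Open vSwitch; the controller, not the switch firmware, then decides the path of every new flow.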

    Network architecture
    An online encyclopedia about the field of network architecture: "Network architecture is the design of a computer network. It is a framework for the specification of a network's physical components and their functional organization and configuration, its operational principles and procedures, as well as communication protocols used.
    In telecommunication, the specification of a network architecture may also include a detailed description of products and services delivered via a communications network, as well as detailed rate and billing structures under which services are compensated.
    The network architecture of the Internet is predominantly expressed by its use of the Internet Protocol Suite, rather than a specific model for interconnecting networks or nodes in the network, or the usage of specific types of hardware links.
    [...]

    Distributed computing
    In distinct usage in distributed computing, the network architecture often describes the structure and classification of a distributed application architecture, as the participating nodes in a distributed application are often referred to as a network. For example, the applications architecture of the public switched telephone network (PSTN) has been termed the Intelligent Network. There are any number of specific classifications but all lie on a continuum between the dumb network (e.g. Internet) and the intelligent network (e.g. the telephone network).
    A popular example of such usage of the term in distributed applications, as well as PVCs (permanent virtual circuits), is the organization of nodes in peer-to-peer (P2P) services and networks. P2P networks usually implement overlay networks running over an underlying physical or logical network. These overlay networks may implement certain organizational structures of the nodes according to several distinct models, the network architecture of the system."

    Network virtualization
    An online encyclopedia about the field of network virtualization: "In computing, network virtualization or network virtualisation is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform [respectively hardware] virtualization, often combined with resource virtualization.
    Network virtualization is categorized as either external virtualization, combining many networks or parts of networks into a virtual unit, or internal virtualization, providing network-like functionality to software containers on a single network server.

    Components
    Various equipment and software vendors offer network virtualization by combining any of the following:

  • Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
  • Network elements, such as firewalls and load balancers
  • Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
  • Network storage devices
  • Network machine-to-machine elements, such as telecommunications devices
  • Network mobile elements, such as laptop computers, tablet computers, and smart phones
  • Network media, such as Ethernet and Fibre Channel

    External virtualization
    External network virtualization combines or subdivides one or more local area networks (LANs) into virtual networks to improve a large network's or data center's efficiency. A virtual local area network (VLAN) and network switch comprise the key components. Using this technology, a system administrator can configure systems physically attached to the same local network into separate virtual networks. Conversely, an administrator can combine systems on separate local area networks (LANs) into a single VLAN spanning segments of a large network.

    Internal virtualization
    Internal network virtualization configures a single system with software containers, such as Xen hypervisor control programs, or pseudo-interfaces, such as a VNIC, to emulate a physical network with software. This can improve a single system's efficiency by isolating applications to separate containers or pseudo-interfaces.[1]"

    Network Functions Virtualization (NFV)
    An online encyclopedia about the field of Network Functions Virtualization (NFV): "Network functions virtualization (also network function virtualization or NFV)[1] is a network architecture concept that uses the technologies of IT virtualization to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create communication services.
    NFV relies upon, but differs from, traditional server-virtualization techniques, such as those used in enterprise IT. A virtualized network function, or VNF, may consist of one or more virtual machines running different software and processes, on top of standard high-volume servers, switches and storage devices, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function.
    [...]

    Background
    Product development within the telecommunication industry has traditionally followed rigorous standards for stability, protocol adherence and quality, reflected by the use of the term carrier grade to designate equipment demonstrating this reliability.[3] While this model worked well in the past, it inevitably led to long product cycles, a slow pace of development and reliance on proprietary or specific hardware, e.g., bespoke application-specific integrated circuits (ASICs). The rise of significant competition in communication services from fast-moving organizations operating at large scale on the public Internet (such as Google Talk, Skype, Netflix) has spurred service providers to look for ways to disrupt the status quo.

    History
    In October 2012, a specification group, "Network Functions Virtualisation",[4] published a white paper[5] at a conference in Darmstadt, Germany, on software-defined networking (SDN) and OpenFlow. The group, part of the European Telecommunications Standards Institute (ETSI), was made up of representatives from the telecommunication industry from Europe and beyond.[6 [Tier 1 Carriers Tackle Telco SDN. [2012]]][7] Since the publication of the white paper, the group has produced over 100 publications.[8 [Standards for NFV: Network Functions Virtualisation]]

    Industry impact
    NFV has proven a popular standard even in its infancy. Its immediate applications are numerous, such as virtualization of mobile base stations, platform as a service (PaaS), content delivery networks (CDN), fixed access and home environments.[22] The potential benefits of NFV is anticipated to be significant. Virtualization of network functions deployed on general purpose standardized hardware is expected to reduce capital and operational expenditures, and service and product introduction times.[23][24] Many major network equipment vendors have announced support for NFV.[25] This has coincided with NFV announcements from major software suppliers who provide the NFV platforms used by equipment suppliers to build their NFV products.[26][27]
    However, to realize the anticipated benefits of virtualization, network equipment vendors are improving IT virtualization technology to incorporate carrier-grade attributes required to achieve high availability, scalability, performance, and effective network management capabilities.[28] To minimize the total cost of ownership (TCO), carrier-grade features must be implemented as efficiently as possible. This requires that NFV solutions make efficient use of redundant resources to achieve five-nines availability (99.999%),[29] and of computing resource without compromising performance predictability.
    The NFV platform is the foundation for achieving efficient carrier-grade NFV solutions.[30] It is a software platform running on standard multi-core hardware and built using open source software that incorporates carrier-grade features. The NFV platform software is responsible for dynamically reassigning VNFs due to failures and changes in traffic load, and therefore plays an important role in achieving high availability. There are numerous initiatives underway to specify, align and promote NFV carrier-grade capabilities such as ETSI NFV Proof of Concept,[31] ATIS[32] Open Platform for NFV Project,[33] Carrier Network Virtualization Awards[34] and various supplier ecosystems.[35]
    The vSwitch, a key component of NFV platforms, is responsible for providing connectivity both VM-to-VM (between VMs) and between VMs and the outside network. Its performance determines both the bandwidth of the VNFs and the cost-efficiency of NFV solutions. The standard Open vSwitch's (OVS) performance has shortcomings that must be resolved to meet the needs of NFVI solutions.[36] Significant performance improvements are being reported by NFV suppliers for both OVS and Accelerated Open vSwitch (AVS) versions.[37][38]
    Virtualization is also changing the way availability is specified, measured and achieved in NFV solutions. As VNFs replace traditional function-dedicated equipment, there is a shift from equipment-based availability to a service-based, end-to-end, layered approach.[39][40] Virtualizing network functions breaks the explicit coupling with specific equipment, therefore availability is defined by the availability of VNF services. Because NFV technology can virtualize a wide range of network function types, each with their own service availability expectations, NFV platforms should support a wide range of fault tolerance options. This flexibility enables CSPs to optimize their NFV solutions to meet any VNF availability requirement.

    Management and orchestration (MANO)
    ETSI has already indicated that an important part of controlling the NFV environment be done through automation and orchestration. There is a separate stream MANO within NFV outlining how flexibility should be controlled.[41]
    ETSI delivers a full set of standards enabling an open ecosystem where Virtualized Network Functions (VNFs) can be interoperable with independently developed management and orchestration systems, and where the components of a management and orchestration system are themselves interoperable."

    Comment
    See the distributed operating system Apertos (Muse) and the Cognac system based on Apertos, as well as their integration with virtualization, actor, agent, actoragent, blackboard system, SBA, grid computing 1.0 and cloud computing 1.0, and so on, which shows once again that NFV is indeed a part of our OS.
    Even if not everything has been named by us or can be resolved by us, the development respectively implementation of the related parts of our OS shows that our OS is always the foundation or source of inspiration or blueprint. For example, NFV is already grouped together with carrier cloud and telco cloud besides public and private cloud, SDN-controlled WAN, legacy WAN, and other multivendor network domains (physical and virtual), and connected with intelligent automation, orchestration, Data Science and Analytics (DSA), proactive Ops, SD-WAN/NFV, mSOA, and DevOps (see once again the links given at the beginning of this clarification).
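
    As a minimal illustration of the chaining idea described in the quote above, the following Java sketch models virtualized network functions as software building blocks that are composed into a service chain; the Vnf interface and the toy firewall and NAT functions are invented for the example and merely stand in for real VNFs running in virtual machines.

    import java.util.List;

    // Toy model of a packet passing through a chain of VNFs.
    record Packet(String payload) {}

    // A virtualized network function: a software building block that
    // processes traffic and can be chained with other functions.
    interface Vnf {
        Packet process(Packet p);
    }

    public class ServiceChain {
        private final List<Vnf> chain;

        public ServiceChain(List<Vnf> chain) { this.chain = chain; }

        // Pass a packet through every function in the chain in order.
        public Packet handle(Packet p) {
            for (Vnf f : chain) {
                p = f.process(p);
            }
            return p;
        }

        public static void main(String[] args) {
            // Toy firewall and NAT functions standing in for real VNFs.
            Vnf firewall = p -> p.payload().contains("blocked") ? new Packet("") : p;
            Vnf nat = p -> new Packet(p.payload().replace("10.0.0.1", "203.0.113.7"));
            ServiceChain chain = new ServiceChain(List.of(firewall, nat));
            System.out.println(chain.handle(new Packet("from 10.0.0.1")).payload());
        }
    }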

    Open Networking Foundation
    An online encyclopedia about the Open Networking Foundation: "The Open Networking Foundation (ONF) is a nonprofit trade organization, funded by companies such as Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo! aimed at promoting networking through software-defined networking (SDN) and standardizing the OpenFlow protocol and related technologies.[2 [Open Networking Foundation Formed to Speed Network Innovation. [2011]]] The standards-setting and SDN-promotion group was formed out of recognition that cloud computing will blur the distinctions between computers and networks.[3] The initiative was meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas.[4]
    [...]
    Google's adoption of OpenFlow software was discussed by Urs Hölzle at a trade show promoting OpenFlow in April, 2012.[6][7] Hölzle is the chairman of the ONF's board of directors, serving on the board along with representatives of the other five founding board members plus NTT Communications and Goldman Sachs. Stanford University professor Nick McKeown and U.C. Berkeley professor Scott Shenker also serve on the board as founding directors representing themselves.[1]
    In 2016 the ONF announced it would merge with the Open Networking Lab (ON.Lab).[8 [Open Networking Foundation and ON.Lab to Merge to Accelerate Adoption of SDN - Open Networking Foundation]] The resulting entity retained the ONF name in 2017.[9 [Open Networking Foundation Unveils New Open Innovation Pipeline to Transform Open Networking. [2017]]]"

    Comment
    The list of funding companies and the goals and activities explains a lot, too, does it not?
    Note the similarity between the names OntoLab and Open Networking Lab (ON.Lab), chosen to confuse the public in this way as well.
    Also note that the statement "cloud computing will blur the distinctions between computers and networks" points to Cloud Computing of the second generation (CC 2.0) respectively the related parts of our ON, OW, and OV of our OS.
    Oh, yes, do not forget to mention: "The network is the computer.", [Sun Microsystems, 1990s]

    Open Network Operating System (ONOS)
    An online encyclopedia about the Open Network Operating System (ONOS): "The ONOS (Open Network Operating System) project is an open source community hosted by The Linux Foundation. The goal of the project is to create a software-defined networking (SDN) operating system for communications service providers that is designed for scalability, high performance and high availability.

    History
    On December 5, 2014, the Open Networking Lab (ON.Lab) along with other industry partners including AT&T and NTT Communications released the ONOS source code to start the open source community.[1] On October 14, 2015, the Linux Foundation announced that ONOS had joined the organization as one of its collaborative projects.[2]
    The project was started around October 2012 under the leadership of [...] an architect at ON.Lab. The name ONOS was coined around the end of 2012 [...]. An early prototype was shown in April 2013[3] at the Open Networking Summit (ONS), and the journey[4] of the initial iterations was featured at ONS 2014.

    Technology Overview
    The software is written in Java and provides a distributed SDN applications platform atop Apache Karaf OSGi container. The system is designed to operate as a cluster of nodes that are identical in terms of their software stack and can withstand failure of individual nodes without causing disruptions in its ability to control the network operation.
    [...]
    While ONOS leans heavily on standard protocols and models, e.g. OpenFlow, NETCONF, OpenConfig, its system architecture is not directly tied to them. Instead, ONOS provides its own set of high-level abstractions and models, which it exposes to the application programmers. These models can be extended by the applications at run-time. To prevent the system from becoming tied to a specific configuration or control protocol, any software in direct contact with protocol-specific libraries and engaging in direct interactions with network environment is deliberately isolated into its own tier referred to as a provider or a driver. Likewise, any software in direct contact with intra-cluster communication protocols is deliberately isolated into its own tier referred to as a store.
    The platform provides applications with a number of high-level abstractions, through which the applications can learn about the state of the network and through which they can control the flow of traffic through the network. The network graph abstraction provides information about the structure and topology of the network.
    [...]
    Applications (core extensions) can be loaded and unloaded dynamically, via REST API or GUI, and without the need to restart the cluster or its individual nodes. ONOS application management subsystem assumes the responsibility for distributing the application artifacts throughout the cluster to assure that all nodes are running the same application software.
    [...]

    Members
    There are two tiers of membership for ONOS: Partner and Collaborator, with varying levels of commitment.
    Partners

  • AT&T
  • China Unicom
  • Ciena
  • Cisco [Systems]
  • Comcast
  • Deutsche Telekom
  • Ericsson
  • Fujitsu
  • Google
  • Huawei
  • Intel
  • NEC
  • Nokia
  • NTT Communications
  • Radisys
  • Samsung Electronics
  • Türk Telekom
  • Verizon"

    Comment
    Definitely, this combination and integration of

  • operating system-level virtualization or containerization,
  • network functions of an operating system, and
  • Service-Oriented technologies (SOx), including SOC, SOP, SOA, and mSOA,

    is a part of our OS and is only allowed as open software but not as free software, despite the fact that the Service-Oriented Programming (SOP) paradigm or model was originally designed for Inter-Process Communication (IPC); our Evoos has all characteristic elements as well, as is the case with the other related matter.
    The conceptual and hence legal problem with ONOS is that it is an operating system (os), and it does not matter how our original and unique composition and integration according to the OSA is realized. It is obvious that our OS has been taken as source of inspiration and blueprint, and therefore a causal link with our OS cannot be avoided in this way.
    In addition, ONOS is merely SOA, but not mSOA, which has superseded SOA. If it were mSOA, then it would be a part of our OS anyway.
    Despite this, we ask the question why it is not put directly into the os, as in the case of NFV.
    This leads to the question why one does not take a KLOS or a microkernel, ditch the additional SOA or mSOA, and realize mSOA as kernel services. Yes, indeed, as we do with our OS.
    We also find the list of the members highly interesting, because not many of the very well known actors are missing.
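
    To make the tier separation described in the quote above more tangible, here is a schematic Java sketch with entirely hypothetical interfaces: protocol-specific code is isolated behind a provider tier and cluster-communication code behind a store tier, so the core never touches either directly. None of these names are taken from the actual ONOS code base.

    // Provider tier: the only place that would touch a concrete control
    // protocol such as OpenFlow or NETCONF (hypothetical interface).
    interface DeviceProvider {
        void applyFlowRule(String deviceId, String rule);
    }

    // Store tier: the only place that would touch intra-cluster
    // communication and state replication (hypothetical interface).
    interface FlowRuleStore {
        void record(String deviceId, String rule);
    }

    // Core: works purely against the abstractions above.
    class FlowRuleCore {
        private final DeviceProvider provider;
        private final FlowRuleStore store;

        FlowRuleCore(DeviceProvider provider, FlowRuleStore store) {
            this.provider = provider;
            this.store = store;
        }

        void install(String deviceId, String rule) {
            store.record(deviceId, rule);           // replicate the intent in the cluster
            provider.applyFlowRule(deviceId, rule); // push it to the device
        }
    }

    public class FlowRuleDemo {
        public static void main(String[] args) {
            DeviceProvider protocolDriver = (dev, rule) ->
                System.out.println("push to " + dev + " via protocol driver: " + rule);
            FlowRuleStore clusterStore = (dev, rule) ->
                System.out.println("replicate in cluster store: " + dev + " -> " + rule);
            new FlowRuleCore(protocolDriver, clusterStore).install("switch-1", "drop tcp/23");
        }
    }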

    Apache Karaf
    An online encyclopedia about Apache Karaf: "Apache Karaf is a modular open source OSGi (Release 6[2]) runtime environment.[3] The project became a top level project in 2010, previously being a subproject of Apache ServiceMix.[4]
    [...]
    Karaf Runtime
    Karaf Container is a modern and polymorphic container. It's a lightweight, powerful, and enterprise ready container powered by OSGi. By polymorphic, it means that Karaf can host any kind of applications: [Open Services Gateway initiative (]OSGi[)], Spring [Framework], [Web application ARchive (]WAR[) file format], and much more. Karaf can be used as a standalone container, or in a bootstrap way using Karaf Boot.
    Karaf Cellar
    Karaf Cellar is a clustering solution for Karaf. It allows you to manage multiple instances, with synchronization between the instances."

    Comment
    Read also the comment made in relation to the quote about the Spring Framework below.
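
    For readers unfamiliar with what a container such as Karaf actually hosts, the following is a minimal OSGi bundle activator against the standard org.osgi.framework API; the class name and log messages are made up, and the bundle only runs inside an OSGi container, so take it as a sketch of the programming model.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // A minimal OSGi bundle: the container calls start() and stop() when
    // the bundle is activated or deactivated, e.g. inside Apache Karaf.
    public class HelloActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            System.out.println("Bundle started: " + context.getBundle().getSymbolicName());
        }

        @Override
        public void stop(BundleContext context) {
            System.out.println("Bundle stopped.");
        }
    }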

    Apache ServiceMix
    An online encyclopedia about Apache ServiceMix: "Apache ServiceMix is an enterprise-class open-source distributed enterprise service bus (ESB).

    Architecture
    It is based on the service-oriented architecture (SOA) model. It is a project of the Apache Software Foundation and was built on the semantics and application programming interfaces of the Java Business Integration (JBI) specification JSR 208 [published around 2005]. [...] ServiceMix fully supports the OSGi framework. ServiceMix is lightweight and easily embeddable, has integrated Spring Framework support and can be run at the edge of the network (inside a client or server), as a standalone ESB provider or as a service within another ESB. ServiceMix is compatible with Java SE or a Java EE application server. ServiceMix uses ActiveMQ to provide remoting, clustering, reliability and distributed failover. The basic frameworks used by ServiceMix are Spring and XBean.[5]
    ServiceMix is composed of the latest versions of Apache ActiveMQ, Apache Camel, Apache CXF, and Apache Karaf. It was accepted as an official Apache project by the ASF Board of Directors on September 19, 2007.[6 [Announcement]]"

    Spring Framework
    An online encyclopedia about Spring Framework: "The Spring Framework is an application framework and inversion of control container for the Java platform. The framework's core features can be used by any Java application, but there are extensions for building web applications on top of the Java EE (Enterprise Edition) platform. Although the framework does not impose any specific programming model, it has become popular in the Java community as an addition to, or even replacement for the Enterprise JavaBeans (EJB) model. [...]
    [...]
    Spring 5 is announced to be built upon Reactive Streams compatible Reactor Core.[11]"

    Comment
    Indeed, the Spring Framework has existed since the year 2002, but not Spring Cloud and its support for asynchronous streaming (see the quote about Reactive Streams below), which came after the start of our OS.
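
    As a small illustration of the inversion of control container mentioned in the quote above, the following uses Spring's Java-based configuration; the GreetingService bean is a made-up example, and the sketch assumes the Spring Framework (4+) on the classpath.

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // A made-up service whose lifecycle is managed by the Spring container.
    class GreetingService {
        String greet(String name) { return "Hello, " + name; }
    }

    // Java-based configuration: the container, not the caller, constructs the bean.
    @Configuration
    class AppConfig {
        @Bean
        GreetingService greetingService() { return new GreetingService(); }
    }

    public class SpringIocDemo {
        public static void main(String[] args) {
            try (var ctx = new AnnotationConfigApplicationContext(AppConfig.class)) {
                // The bean is obtained from the container instead of being constructed here.
                GreetingService svc = ctx.getBean(GreetingService.class);
                System.out.println(svc.greet("Spring"));
            }
        }
    }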

    Reactive Streams
    An online encyclopedia about Reactive Streams: "Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.[1]

    Origin
    Reactive Streams started as an initiative in late 2013 between engineers at Netflix, Pivotal and Lightbend. Some of the earliest discussions began in 2013 between the Play and Akka teams at Lightbend.[2][3] Lightbend is one of the main contributors of Reactive Streams.[4] Other contributors include Red Hat, Oracle, Twitter and spray.io.[5]

    Goals
    The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary - like passing elements on to another thread or thread-pool - while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, back pressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded."
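
    The back pressure mechanism described above can be demonstrated with the JDK's own java.util.concurrent.Flow API (Java 9+), which adopted the Reactive Streams interfaces; the subscriber below requests items one at a time, so the publisher can never flood it. The printed messages are of course only illustrative.

    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;

    public class BackPressureDemo {
        public static void main(String[] args) throws InterruptedException {
            try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
                publisher.subscribe(new Flow.Subscriber<Integer>() {
                    private Flow.Subscription subscription;

                    @Override
                    public void onSubscribe(Flow.Subscription s) {
                        subscription = s;
                        s.request(1); // ask for exactly one element: back pressure
                    }

                    @Override
                    public void onNext(Integer item) {
                        System.out.println("received " + item);
                        subscription.request(1); // pull the next element only now
                    }

                    @Override
                    public void onError(Throwable t) { t.printStackTrace(); }

                    @Override
                    public void onComplete() { System.out.println("done"); }
                });
                for (int i = 1; i <= 3; i++) publisher.submit(i);
            }
            Thread.sleep(500); // allow the asynchronous delivery to finish before exit
        }
    }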

    Service-Oriented Architecture (SOA)
    An online encyclopedia about the field of Service-Oriented Architecture (SOA): " Service-oriented architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. An SOA service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. SOA is also intended to be independent of vendors, products and technologies.[1 [Chapter 1: Service Oriented Architecture (SOA). [2016]]] A service has four properties according to one of many definitions of SOA:[2 [Service-Oriented Architecture Standards - The Open Group]]
    1. It logically represents a business activity with a specified outcome.
    2. It is self-contained.
    3. It is a black box for its consumers, meaning the consumer does not have to be aware of the service's inner workings.
    4. It may consist of other underlying services.[3 [What Is SOA? [2016]]]
    Different services can be used in conjunction to provide the functionality of a large software application,[4 [Cloud Computing: A Practical Approach. [2010]]] a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately maintained and deployed software components. It is enabled by technologies and standards that facilitate components' communication and cooperation over a network, especially over an IP network.
    SOA is related to the idea of an application programming interface (API), an interface or communication protocol between different parts of a computer program intended to simplify the implementation and maintenance of software. An API can be thought of as the service, and the SOA the architecture that allows the service to operate.

    Overview
    In SOA, services use protocols that describe how they pass and parse messages using description metadata. This metadata describes both the functional characteristics of the service and quality-of-service characteristics.
    [...]

    Defining concepts
    The related buzzword service-orientation promotes loose coupling between services. SOA separates functions into distinct units, or services,[6] which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services.[7]
    A manifesto was published for service-oriented architecture in October, 2009. This came up with six core values which are listed as follows:[8]
    1. Business value is given more importance than technical strategy.
    2. Strategic goals are given more importance than project-specific benefits.
    3. Intrinsic inter-operability is given more importance than custom integration.
    4. Shared services are given more importance than specific-purpose implementations.
    5. Flexibility is given more importance than optimization.
    6. Evolutionary refinement is given more importance than pursuit of initial perfection.
    SOA can be seen as part of the continuum which ranges from the older concept of distributed computing[6 [Introduction to Service-Oriented Modeling. Service-Oriented Modeling: Service Analysis, Design, and Architecture. [2008]]][9] and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).[10]

    Implementation approaches
    Service-oriented architecture can be implemented with web services.[21 [Web Services-Oriented Architecture in Production in the Finance Industry. [2004]]] This is done to make the functional building-blocks accessible over standard Internet protocols that are independent of platforms and programming languages. These services can represent either new applications or just wrappers around existing legacy systems to make them network-enabled.[22]
    Implementers commonly build SOAs using web services standards. One example is SOAP, which has gained broad industry acceptance [...] in 2003. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, also implement SOA using any other service-based technology, such as Jini, [Common Object Request Broker Architecture (]CORBA[)] or [REpresentational State Transfer (]REST[)].
    Architectures can operate independently of specific technologies and can therefore be implemented using a wide range of technologies, including:
    [...]

    {rest of quotes coming} Web Service (WS), Business Process (BP), Web Services Business Process Execution Language (WS-BPEL)
    [...]

    Criticism
    SOA has been conflated with Web services;[30] however, Web services are only one option to implement the patterns that comprise the SOA style. In the absence of native or binary forms of remote procedure call (RPC), applications could run more slowly and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI), Windows Communication Foundation (WCF) and data distribution service (DDS)) that do not depend on remote procedure calls or translation through XML. At the same time, emerging open-source XML parsing technologies (such as VTD-XML) and various XML-compatible binary formats promise to significantly improve SOA performance. Services implemented using JSON instead of XML do not suffer from this performance concern.[31][32][33]
    Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by messages exchanged between the provider and the consumer. This constraint has the drawback that it could reduce the overall scalability of the service provider if the service-provider needs to retain the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult.[34] Ultimately, some critics feel that SOA services are still too constrained by applications they represent.[35 [SOA services still too constrained by applications they represent. [2009]]]
    A primary challenge faced by service-oriented architecture is managing of metadata. Environments based on SOA include many services which communicate among each other to perform tasks. Due to the fact that the design may involve multiple services working in conjunction, an Application may generate millions of messages. Further services may belong to different organizations or even competing firms creating a huge trust issue. Thus SOA governance comes into the scheme of things.[36]

    Extensions and variants
    Event-driven architectures
    Application programming interfaces
    Application programming interfaces (APIs) are the frameworks through which developers can interact with a web application.
    Web 2.0
    [...] A topic that has experienced extensive coverage involves the relationship between Web 2.0 and service-oriented architectures.[...]
    SOA is the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notion of complexity-hiding and reuse, but also the concept of loosely coupling services has inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus can not be regarded "parallel philosophies", whereas others consider the two concepts as complementary and regard Web 2.0 as the global SOA.[39]
    The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, as of 2008, use-cases demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.[39 [Web 2.0 and SOA: Converging Concepts Enabling the Internet of Services. [2007]]]

    Microservices
    Microservices are a modern interpretation of service-oriented architectures used to build distributed software systems. Services in a microservice architecture[40] are processes that communicate with each other over the network in order to fulfill a goal. These services use technology agnostic protocols,[41] which aid in encapsulating choice of language and frameworks, making their choice a concern internal to the service. Microservices are a new realisation and implementation approach to SOA, which have become popular since 2014 (and after the introduction of DevOps), and which also emphasize continuous deployment and other agile practices.[42 [Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture [2016]]]
    There is no single commonly agreed definition of microservices. The following characteristics and principles can be found in the literature:

  • fine-grained interfaces (to independently deployable services),
  • business-driven development (e.g. domain-driven design),
  • IDEAL cloud application architectures,
  • polyglot programming and persistence,
  • lightweight container deployment,
  • decentralized continuous delivery, and
  • DevOps with holistic service monitoring."
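
    To ground the idea of a service as a self-contained unit of functionality with a well-defined network interface, here is a minimal HTTP service using only the JDK's built-in com.sun.net.httpserver package; the /statement endpoint and its JSON response are invented for illustration, echoing the credit card statement example from the quote above.

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class StatementService {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            // One discrete unit of functionality, reachable over the network;
            // consumers see only the interface, never the implementation.
            server.createContext("/statement", exchange -> {
                byte[] body = "{\"balance\": 42.0}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            System.out.println("Service listening on http://localhost:8080/statement");
        }
    }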

    From an online encyclopedia about the field of Service-Oriented Architecture (SOA) (translated German version): "Service-oriented architecture (SOA) [...] is an architectural pattern of information technology from the field of distributed systems to structure and use the services of IT systems. A special role is played by the orientation towards business processes, whose levels of abstraction are the basis for concrete service implementations: [...]. By assembling (orchestrating) services of lower abstraction levels, services of higher abstraction levels can be created in a very flexible way and with the greatest possible reusability.
    [...] [...] The costs of programming the nth application implemented with SOA should be eliminated, as all necessary services are already available and only need to be orchestrated. Thus only the costs for business analysis and software configuration remain.
    SOA requires a very strong integration of the individual IT components, so that their orchestration can be achieved cost-effectively. SOA therefore already plays a role in the selection of IT components.
    One technical form of implementing SOA is to offer these services on the Internet or in the cloud. The communication between such offered services can take place via SOAP, REST, XML-RPC or similar protocols. The user of these services only knows that the service is offered, what input is required and what kind of result is the result. Details about the way the result is determined do not have to be known.

    Definition
    The term "service-oriented architecture" was first used in 1996 by the market research company Gartner [1] Gartner is therefore considered the inventor of the term SOA. There is no generally accepted definition of SOA. Nevertheless, the OASIS definition from 2006 is often quoted:
    "a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains"[2 [Reference Architecture Foundation for Service Oriented Architecture Version 1.0 [2012]]]
    [...]
    The central theme of all definitions is services. In the following, the ideal-typical properties of services in an SOA are listed. In practice, not all of these requirements are fully met [4].

  • A service is an IT representation of business functionality [5].
  • A service is self-contained (autarkic [or self-sufficient]) and can be used independently.
  • A service is available in a network.
  • A service has a well-defined published interface (contract). For the use it is sufficient to know the interface. However, knowledge of the implementation details is not required.
  • A service is platform-independent, i.e. providers and users of a service can be realized in different programming languages on different platforms.
  • A service is registered in a directory.
  • A service is dynamically bound, i.e. when creating an application that uses a service, the service does not need to be present. It is localized and bound only when it is executed.
  • A service should be roughly granular to reduce dependency between distributed systems.

    Delimitation

  • SOA is not web services - SOA describes an architecture paradigm detached from concrete implementation methods and techniques.
  • SOA is not new - a service-oriented architecture could already be implemented years before the term was introduced with the methods and procedures available at that time and was used, among others, with CORBA in 1991.
  • SOA is not a solution for business problems - as an architecture paradigm, SOA does not give any recommendation for dealing with business problems. See also the section on criticism.
  • SOA is individual - there is no "standard SOA". A company must always tailor an SOA to its own needs.

    [...]

    Modeling of a SOA
    There are several ways to describe SOA with a modeling language. From the OMG there is the open source specification SoaML, with which SOA services can be represented by means of an extended UML profile using one's own stereotypes.

    Technical implementation during the term
    The interaction between service provider and service user is based on the paradigm of (publish or register), find, bind, execute [6].
    Publish or register
    The service provider shall publish or register his service in a directory.
    Find
    The software component that wants to use a service looks for it in a directory. If a suitable service is found, the component can proceed to the next step.
    Bind
    The using component receives a reference (address) from the directory, under which it can access the service. The function call is bound to this address.
    Execute
    The service is called. Input parameters are transferred to the service and output parameters are returned in response to the call.

    Environment
    The term service-oriented architecture can be classified in the following environment:

  • Process Management (also Business Process Management, BPM): The definition of the business processes that are supported by IT.
  • IT Service Management (ITSM): Methods required to achieve the best possible support of business processes (BP) by the IT organization. The de facto standard known here is the IT Infrastructure Library (ITIL)."
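
    The publish/register, find, bind, execute cycle quoted above can be sketched with a hypothetical in-memory directory; all names here are invented, and a real SOA would of course use a networked registry such as UDDI or a Jini lookup service instead of a local map.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Hypothetical in-memory sketch of the publish-find-bind-execute cycle.
    public class RegistryDemo {
        // The "directory" maps service names to callable service references.
        private static final Map<String, Function<String, String>> directory = new HashMap<>();

        public static void main(String[] args) {
            // 1. Publish/register: the provider puts its service into the directory.
            directory.put("echo", input -> "echo: " + input);

            // 2. Find: the consumer looks the service up in the directory.
            Function<String, String> service = directory.get("echo");

            // 3. Bind: holding the reference binds the consumer to the provider.
            // 4. Execute: input parameters go in, output parameters come back.
            if (service != null) {
                System.out.println(service.apply("hello"));
            }
        }
    }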

    Comment
    The description has already been bent in the direction of our OS to confuse the public about our original and unique works of art. The best example seen so far by us is this presentation of a continuum and the statement that Cloud Computing (CC) could be seen as the offspring of SOA. If one takes this view, then we have already arrived at our claim and even at the view that CC is part of our OS. We are content with CC 2.0 and all the related rest, which is everything added to CC 1.0 after the introduction of Amazon Elastic Compute Cloud (EC2).

    Here we see another unsuccessful attempt to integrate SOA 2005/2006 with Service-Oriented Computing of the first generation (SOC 1.0) and Service-Oriented Programming of the first generation (SOP 1.0), which should not be confused with SOC 2.0 and SOP 2.0, and Semantic Service-Oriented Computing (SSOC) and Semantic Service-Oriented Programming (SSOP), because they are our Ontologic Computing (OC) and Ontologic Programming (OP), as seen before with the threads

  • from HTC 1.0, GC 1.0, and CC 1.0 to HTC 2.0, Big Data Processing (BDP), GC 2.0, CC 2.0, edge computing and fog computing and
  • from CPS 1.0, IoT 1.0, and NES 1.0, as well as UbiC 1.0 to their second generation 2.0, and
  • the other threads of development.

    But now we have a relatively clear definition of GC 2.0 and CC 2.0, as we already have shown: It is about utilizing

  • SOx 1.0, and SOx 2.0 for other parts of a system than Enterprise Architecture (EA) and {other term with user-focused?} Business Process (BP) and
  • loosely coupled things, specifically applications, services, communications, active items, agents, self-contained things, hypervisors, Virtual Machines (VMs), Runtime Environments (REs), etc.,
  • from SOC, SOA, and Web 2.0 to Web 3.0, and so on,

    which all came with our OS, as far as we do know, besides the many other things.
    All SOx 1.0 and SOx 2.0 are related to Enterprise Architecture (EA), Business Process (BP), Web Service (WS), and Java Jini. {SOP 1.0 unit of work are (in-memory) services or service objects, and SOC 1.0 unit of deployment for services are components}
    But the SOP 1.0 model was originally designed for Inter-Process Communication (IPC) and can be utilized for intra-process communication (e.g. between threads) as well; our Evoos has all characteristic elements as well. The utilization for foundational technology (e.g. {other term?} system process) and the extension with virtualization, automated multi-threading, etc., which led to CC 2.0, was done by us.
    Please do not confuse

  • an operating system, which in fact is merely a Runtime environment (RE) of a computing platform, which again is a software framework, and
  • cloud of services or service cloud

    as parts of a federated SOC platform with proper

  • operating system and
  • cloud computing.

    Also note that many companies in the field of networking and communications did not know what SOC and SOA truly are. This is also reflected in the time gap between the years 2005/2006 and 2008 and in the following years up to today, which (time gap) has been shown in this way once again.
    No, it was, is, and will be our OS with its OSA, OSC, and also ON, OW, and OV. Period.

    Conclusion
    As we showed in the Clarification of the 18th of January 2020, the different development threads are converging, even those that looked at first sight as if there was no causal link with our OS.
    Not surprisingly, despite the fact that os-level virtualization systems existed before the start of our OS, such as for example FreeBSD jail (2000), Virtuozzo (2000) and OpenVZ (2005), Linux-VServer (2001), and Solaris Containers (2004), they were only part of cluster computing and some rudimentary grid and cloud computing systems of the first generation related to virtualization and renting computing power and storage. This only changed after we presented our OS with all the relevant elements as part of our grid and cloud computing of the second generation and also edge and fog computing, as well as the integration of SOx 3.0, SoftBionics (SB), Data Science and Analytics (DSA), High-Throughput Computing 2.0, Big Data Processing (BDP), and so on.
    And the time gap can be seen here once again, because after 2008 the momentum of the development on the basis of our OS increased once it was understood at least in parts.
    In addition, many properties of our OS are still missing.
    ...
    Also, all standards concerning our OS and Os are void if we were not asked.

    Of course, we are worth all that trust and money. :)

    Btw.: Note that we have not published new relevant matter in all these years.
    Entities have no chance to steal our OS, as can be seen more and more easily, or being more precise, as is a matter of fact now.
    We demand that all companies remove the infringing Free and Open Source Software (FOSS) matter immediately.

    The tide has turned.


    23.January.2020

    02:00 UTC+1
    Comment of the Day

    "We know perfectly - everybody knows the rules! I don't like what you did in front of me. Go out, outside please. I'm sorry, we all know the rules. Nobody, nobody has to provoke. Nobody! Okay? [...] Please respect the rules as they are [in place] for centuries. They will not change with me! I can tell you. Okay? So everybody, respect the rules. Please.", [Emmanuel Macron, President of the French Republic, 22nd of January 2020]
    So we do, too.

    02:01 and 16:11 UTC+1
    SOPR #269

    *** Sketching mode ***
    Topics

  • Legal matter
  • License Model (LM)
  • Further steps

    Legal matter
    We can only apologize for not looking into the matter of the fields of

  • High-Throughput Computing (HTC),
  • High Performance and High Productivity Computing System (HP²CS),
  • Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS),
  • operating system-level virtualization or containerization,
  • Service-Oriented technologies (SOx) (Service-Oriented Computing (SOC), Service-Oriented Programming (SOP), Service-Oriented Architecture (SOA), and microService-Oriented Architecture (mSOA)),
  • Grid, Cloud, Edge, and Fog Computing (GCEFC),
  • Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS or SWaaS), and also Everything as a Service (EaaS),
  • orchestration, meshing, etc.,
  • Software-Defined technologies (SDx) (e.g. Software-Defined Networking (SDN), Software-Defined Wide Area Networking (SDWAN), Software-Defined Local Area Networking (SDLAN), and Software-Defined Mobile Networking (SDMN)),
  • Network Functions Virtualization (NFV),
  • and so on

    in greater depth and in all details before, that is since around the end of the year 2008.
    But over the past year we made up for this, because we got this suspicious feeling that something does not work here properly. And indeed, the longer we have been and still are engaged in all these areas, the more it becomes obvious that all the time it was, is, and will be only about our Ontologic System (OS) with its

  • Ontologic System Architecture (OSA),
  • Ontologic System Components (OSC), and
  • Ontologic Applications and Ontologic Services (OAOS), and also
  • Ontologic Net (ON),
  • Ontologic Web (OW), and
  • Ontologic uniVerse (OV).

    Honestly, this is too huge to just give it away in this situation and under these conditions and terms, which are even dictated by external entities, who have no rights to do so at all.

    Our review of related matter showed once again (see the Clarification of the 18th and 21st of January 2020) that companies have

  • stolen our ArtWorks (AWs) and further Intellectual Properties (IPs), and even opened them and given them away for free to the whole public, and
  • conducted serious criminal acts

    already before we suggested the establishment of our SOPR in the year 2017, because they depend on the creations of C.S. and the achievements of our corporation. In fact, they need our OS

  • not only to manufacture and sell their goods and provide and sell their services, but
  • even to manufacture and sell our goods and provide and sell our services.

    For example, social media and content crowdsourcing platforms and also streaming services are utilizing our OS with the elements listed above, specifically its

  • Grid, Cloud, Edge, and Fog Computing (GCEFC),
  • Business Intelligence (BI), Visualization, and Analytics (BIVA),
  • Data Science and Analytics (DSA), and
  • High-Throughput Computing of the second generation (HTC 2.0) and Big Data Processing (BDP)

    in their data centers for

  • providing their services and
  • managing and operating, as well as orchestrating their data centers themselves.

    But the most ridiculous fact is that in a common, legal competition all their technologies (i.e. systems and platforms), goods, and services already have to be, or being more precise, are our technologies, goods, and services, which have been stolen by perverting and breaking every fact and every written and unwritten law.

    In addition, we have not only individual frauds and conspiracies, but also local and worldwide respectively national and international acts of corruption and conspiracies.

    Everybody knows the rules. They will not change with us. So everybody has to respect the rules. Nobody is above the law and sometimes laws cannot be changed, when foundational principles must be upheld, such as for example democracy.
    The rule of law and the demand for providing benefit for the public cannot afford to legitimize

  • harming freedom of choice, innovation, and competition pro bono publico and
  • conducting serious criminal activities,

    which by the way would constitute the utter nonsense of harming a benefit for the public for providing a benefit for the public, for

  • upholding the legal order, and the basic principle of unity, justice, and freedom, and
  • establishing (public) peace (under (the) law) and harmony.

    The argument that we are harming freedom of choice, innovation, and competition pro bono publico is also void, because there is only one old Internet and only one old World Wide Web (WWW), and it was, still is, and will be good for the public to have only one of both. Furthermore, we created the successor of the Internet with our ON, and the successor of the WWW with our OW, and also something totally new with our OV, and it was, still is, and will be good for the public to have only one of all.
    Moreover, it is up to the competitors to create or at least develop something that is original and unique, and does not copy our OS and our Os. This is called freedom of choice, innovation, and competition, and this is the level on which we are competing. Simply saying "We want that what you have created." is not convincing in relation to said aspects, and simply taking what we own and even giving our properties away for free to the public is not legal.

    Last but not least, we do have the

  • moral rights hence no need for any modification,
  • copyrights hence no need for any opening and licensing, and
  • property rights hence no need for any granting of use rights or use of property rights, such as giving away of signals, data, information, knowledge

    in relation to our ArtWorks (AWs) Ontologic System (OS) and Ontoscope (Os) in whole or in part.
    So our creation, our work of art, our OS, and guess what, our rules, our exploitations, and our profits.

    We are also thinking once again about demanding the common triple damage compensations in addition to the share of up to 100% of the profit generated illegally, which obviously is a significant amount or even the whole profit generated by most leading ICT companies for years.

    Transition process
    We have always found the same actors that have gone berserk, specifically the leading companies of the ICT sector, that are also members of Free and Open Source Software (FOSS) foundations, and the engineering sector. But they are not so many as to become a problem that would be impossible to handle by the jurisdictions.
    The utilization of Free and Open Source Software (FOSS) is only allowed in the facilities (e.g. data centers) and platforms (e.g. carrier cloud, telco cloud, IoT cloud, AutoCloud, MR cloud, AR cloud, VR cloud) of our SOPR until replacements are implemented and provided, if required at all. We have addressed this to some extent by making the most renitent of them our takeover targets.

    Our Hightech Office Ontonics, which is also the parent of our managing and collecting societies to give them a legal basis, has made takeover offers that in general are 30% of the estimated enterprise value as of the 1st of January 2020 respectively the estimated enterprise value as of the 1st of January 2015.
    We do not intend to change the management, with the exception of taking part in the important decision making. It is more a change of ownership and profit.
    If shareholders refuse to sell under our FRANDAC terms and conditions, then we will act accordingly and submit our takeover offers to the insolvency administrators or liquidators.
    We expect that big banks accommodate us with loans for accelerating the takeover processes, if required at all, by taking the expected damage compensations, the royalties being collected by our SOPR, and other capital sources as the basis for the decisions about the ranges of the credit lines.

    License Model (LM)
    We have no reason and hence no intention to change the LM in relation to the reproduction of our Os and the production of hardware and other goods based on our OS. 5% of the revenue, with all 7 discounts, for the reproduction of our Os and the production of goods with our OS, utilized in the legal scope of our AWs and IPs (OS with its ON, OW, and OV), is considered to be under Fair, Reasonable, And Non-Discriminatory, As well as Customary (FRANDAC) terms and conditions.
    But there is absolutely no doubt that a share of 8 over 16 to 30% of the overall revenue is fully justified for a platform based on the reproduction of our OSA and OSC and the performance of our OAOS under FRANDAC terms and conditions.
    But we also own the foundational system and the related property rights.

    We concluded that we have no other choice to

  • counteract a simulation of an ordinary technological progress, a hidden Ponzi scheme, or other fraudulent activities,
  • exploit our AWs and IPs, and
  • restore the initial legal situation and our momentum as the creators and pioneers,

    than to make the LM either

  • more expensive, or
  • more restrictive and more diversified,

    as already announced and practiced in former issues.
    We also concluded once again that it is sufficient to fulfill the demands and requirements of the public, specifically freedom of choice, innovation, and competition, and also interoperability, comfort, safety and security, as well as privacy pro bono publico==for the public good, by differentiating between public and private cloud in addition to Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) (see also the issue #265 of the 9th of January 2020).

    But our LM is already too expensive in some cases and an expropriation is not possible in a legal way in total contrast to our exploitation of our AWs and IPs.
    Therefore, we are considering for example the following options:

  • Extension of the sources of licensing by asking a
    • fee for not naming C.S., our corporation, or our business units Ontologics, OntoLab, and Ontonics,
    • share of the profit generated with our AWs and IPs, and
    • share of an enterprise dependent on our AWs and IPs.
  • Reduction of the scope of modification respectively restriction of the scope of licensing of our AWs and IPs by
    • managing and operating our OS more like a proprietary or closed platform, and
    • provisioning tasks to contractors, suppliers, and providers,
    • making them more exclusive, such as network domains
      • carrier cloud, telco cloud with or without NFV,
      • SDx,
      • IaaS and PaaS public and private cloud,
      • etc.

      in whole or in part are part of the infrastructure of our SOPR. Entities are only allowed to manage and operate, as well as orchestrate their own private systems or platforms of the fields of HTC 2.0 and GCEFC

      • in our facilities or
      • on their premises or
      • both,

      if they are not covered by an exclusion respectively are not part of the infrastructure of our SOPR. Competition does not take place on the system level but on the platform level if a platform is not covered by an exclusion respectively is not part of the infrastructure of our SOPR and the ON, OW, and OV platforms

      • Superstructure,
      • IDentity and Access Management System (IDAMS), Social and Societal System (SSS),
      • Electronic Commerce System (ECS), including Marketplace for Everything (MfE) platform,
      • Ontologic Financial System (OFinS),
      • etc.
    • {better wording and explanation required} giving licenses to a single entity, that is one of the 5 or 10 market leaders in more than 2 market sectors, like for example
      • online advertisement,
      • social media,
      • Electronic Commerce (EC) and online marketplace,
      • device manufacturing,
      • vehicle manufacturing,
      • drug manufacturing,
      • etc.,

      for only a limited amount of (1 or 2?) fields, that belong to our OS or are managed, operated, or orchestrated in the legal scope of our OS, that has been modified accordingly, like for example

      • High-Throughput Computing (HTC),
      • High Performance and High Productivity Computing System (HP²CS),
      • Distributed System (DS)
        • Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS),
        • GCEFC,
        • etc.,
      • SoftBionics (SB),
      • CPS, IoT, and NES, sensor web, Smart Urban System (SUS), smart city, connected home, car, whatsoever, IIoT, Industry 4.0 and 5.0, Medicine 4.0 and 5.0,
      • Intelligent Personal Assistant (IPA),
      • Mediated Reality (MedR), MR, AR, VR,
      • streaming content, gaming, and so on,
      • as a Service (aaS),
      • Autonomous System (AS) and Robotic System (RS),
      • Ontoscope,
  • Intelligent Raiment (IRaiment or IR), wearable computing, smart clothes,
  • etc.

    Further steps
    We are still working on the legal matter and preparing the set of agreements as well as the takeovers.

    This is not the grid, cloud, SOx, or whatsoever anymore. At least since the year 2007 it is a part of our ON, OW, and OV. But this grid, cloud, SOx, or whatsoever is not dead, because it never existed. It was always our OS with its ON, OW, and OV.

    State unions, governments, federal agencies, companies, and other entities have to understand and respect reality.

    The tide has turned.


    26.January.2020

    19:17 and 22:01 UTC+1
    Clarification

    *** Work in progress - some comments and epilog missing ***
    When discussing the matters about

  • BlackBoard (BB) systems (e.g. Tuple Space (TS), Linda-like system, Space-Based Architecture (SBA)),
  • Service-Oriented technologies (SOx) (Service-Oriented Computing (SOC), Service-Oriented Programming (SOP), Service-Oriented Architecture (SOA), and microService-Oriented Architecture (mSOA)),
  • os-level virtualization or containerization,
  • High-Throughput Computing (HTC),
  • High Performance and High Productivity Computing System (HP²CS),
  • Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS),
  • Grid, Cloud, Edge, and Fog Computing (GCEFC),
  • orchestration,
  • Software-Defined technologies (SDx) (e.g. Software-Defined Networking (SDN), Software-Defined Wide Area Networking (SDWAN), Software-Defined Local Area Networking (SDLAN), and Software-Defined Mobile Networking (SDMN)),
  • Network Functions Virtualization (NFV),
  • and so on
    in the clarifications of February and March 2019, and also of the 18th and 21st of January 2020, we ultimately came again and again to the questions
  • what microservices truly are and
  • what the prior art is truly Teaching, Suggesting, and Motivating (TSM).

    Indeed, we had difficulties at first to separate the paradigms, models, or architectures of SOx.

  • One reason was that we thought SOC 1.0 and SOP 1.0 are different paradigms, because we have three groups in relation to SOP 1.0:
    • SOP 1.0A Sun with Java Jini and Motorola with Openwings,
    • SOP 1.0B others with the Automated Information Router (AIR), and
    • SOP 1.0C General Electric Company (GE), National Institute of Standards and Technology (NIST), et al. with Service-ORiented Computing EnviRonment (SORCER) based on Jini and JavaSpaces and also Rio.
  • We remembered that the person behind Openwings stated that SOP 1.0A was originally designed for Inter-Process Communication (IPC), but we had not concluded at all that there might be the possibility that our Evoos had been copied in whole or in part before. But a review of the matter related to SOx, specifically with Jini and Openwings as well as AIR and SORCER, showed a striking match between features of our Evoos and SOP as part of SOC, so that there is significant evidence that SOP and mSOA are based on our Evoos.
  • In addition, we have a zoo of definitions for SOA and further developments of these paradigms, models, or architectures, specifically by integrating Autonomic Computing (AC) and Semantic (World Wide) Web (SWWW) standards and technologies.

    Inter-Process Communication (IPC)
    From an online encyclopedia about the field of Inter-Process Communication (IPC): "In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow the processes to manage shared data. Typically, applications can use IPC, categorized as clients and servers, where the client requests data and the server responds to client requests.[1] Many applications are both clients and servers, as commonly seen in distributed computing.
    IPC is very important to the design process for microkernels and nanokernels, which reduce the number of functionalities provided by the kernel. Those functionalities are then obtained by communicating with servers via IPC, leading to a large increase in communication when compared to a regular monolithic kernel. IPC interfaces generally encompass variable analytic framework structures. These processes ensure compatibility between the multi-vector protocols upon which IPC models rely.[2 [Inter-process communications for system-level design. [1993]]]
    An IPC mechanism is either synchronous or asynchronous. Synchronization primitives may be used to have synchronous behavior with an asynchronous IPC mechanism."

    Comment
    In Evoos we do not do client / server all the time.
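
    As an editorial illustration of the quoted distinction, the following minimal sketch in the Java programming language emulates an asynchronous channel between two threads of a single process (standing in for separate operating system processes) and obtains synchronous behavior from it with a synchronization primitive; all class and variable names are hypothetical:

    import java.util.concurrent.*;

    public class IpcSketch {
        public static void main(String[] args) throws Exception {
            // Illustrative sketch only: an asynchronous channel; the sender
            // does not wait for the receiver.
            BlockingQueue<String> channel = new LinkedBlockingQueue<>();
            // Synchronization primitive used to obtain synchronous behavior
            // on top of the asynchronous channel, as stated in the quote.
            CompletableFuture<String> reply = new CompletableFuture<>();

            ExecutorService server = Executors.newSingleThreadExecutor();
            server.execute(() -> {
                try {
                    String request = channel.take();    // blocks until a message arrives
                    reply.complete("echo: " + request); // fulfil the reply
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            channel.put("hello");                       // asynchronous send
            System.out.println(reply.get());            // synchronous wait for the response
            server.shutdown();
        }
    }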

    SOC of the first generation (SOC 1.0)
    Jini

    From an online encyclopedia about Jini: "Jini [...] is a network architecture for the construction of distributed systems in the form of modular co-operating services. JavaSpaces is a part of the Jini.
    [...]
    Jini provides the infrastructure for the Service-object-oriented architecture (SOOA).

    Using a service
    Locating services is done through a lookup service.[5] [...] Clients can use the lookup service to retrieve a proxy object to the service; calls to the proxy translate the call to a service request, performs this request on the service, and returns the result to the client. This strategy is more convenient than Java remote method invocation [(RMI)], which requires the client to know the location of the remote service in advance.

    Limitations
    Jini uses a lookup service to broker communication between the client and service. This appears to be a centralized model (though the communication between client and service can be seen as decentralized) that does not scale well to very large systems. However, the lookup service can be horizontally scaled by running multiple instances that listen to the same multicast group.[citation needed]"
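
    The lookup-and-proxy strategy quoted above can be illustrated with a minimal, purely local Java sketch, in which a client retrieves a proxy object by contract from a toy lookup service and calls the service only through that proxy; the Lookup and Echo names are hypothetical, and Jini's actual lookup service additionally provides discovery, leasing, and code download over the network:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class LookupSketch {
        // The contract: clients only know this interface, not the provider.
        interface Echo { String echo(String s); }

        // A toy lookup service: maps contract types to proxy objects.
        static class Lookup {
            private final Map<Class<?>, Object> registry = new ConcurrentHashMap<>();
            <T> void register(Class<T> contract, T proxy) { registry.put(contract, proxy); }
            <T> T lookup(Class<T> contract) { return contract.cast(registry.get(contract)); }
        }

        public static void main(String[] args) {
            Lookup lookup = new Lookup();
            // The provider publishes a proxy; in Jini the proxy would be a
            // downloaded Java object forwarding calls over the network.
            lookup.register(Echo.class, s -> "service says: " + s);

            // The client discovers the service by contract, not by location.
            Echo echo = lookup.lookup(Echo.class);
            System.out.println(echo.echo("hello"));
        }
    }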

    From an online encyclopedia about Jini (translated German version): "Jini [...] is a framework for programming distributed applications, which have special requirements regarding the scalability and the complexity of the cooperation between the different components and cannot be served by existing techniques. Jini provides a flexible infrastructure through which services in a network can be provided. Jini was developed by Sun Microsystems based on the Java programming language.
    [...]
    The Jini network technology is an open architecture, which allows developers to program adaptive network-based services - implemented in hardware or software. With Jini, scalable and flexible networks can be created as needed in a dynamic computing environment.

    The eight fallacies of distributed applications

  • The network is always available
  • The latency is zero
  • The transmission rate is infinite
  • The network is secure
  • The structure of the network does not change
  • There is only one administrator
  • There are no transport costs
  • The network is homogeneous

    These assumptions about the network hinder the effective speed and distribution of the software. The following features of the Jini network technology help to overcome these pitfalls.

  • Code mobility - The programming model of the Java programming language is transferred to the network. It is possible that data and programs are transferred over the network as Java objects.
  • Protocol-independent - allows a high flexibility in the design of the programs.
  • Leasing - enables self-healing and automatic configuration of the network, which increases fault tolerance, for example.
  • Flexibility - the network adapts to changes in the computer environment.
  • Integration - enables easy and fast collaboration of old, current, and future network components[.]
  • Licensing - the Jini network technology is available for free.

    Jini Architecture
    The Jini architecture specifies how clients and services can find each other in a network and work together [or collaborate] to solve given tasks [(see lookup-service)]. The service providers enable the clients to access the services via Java-based objects. The network communication can be done using various techniques such as RMI, CORBA or SOAP, since the client only sees the Java objects offered by the service. The actual network communication is hidden by the Java objects of the service."

    Comment
    The German version seems to be manipulated a little, because

  • Java, including Jini, was about business processes and Web Services (WS) at that time, as is the case with SOA 1.0 and SOP 1.0, while other properties came later,
  • self-healing is considered by the authors of Openwings (see below) to be a part of the Openwings Context pattern and not a part of the Jini Lease pattern, and
  • other features are related to our Evoos and OS, like for example the inclusion of hardware as well as the adaptive property.

    We quoted an online encyclopedia about the SOC 1.0 framework and network technology Jini, because in the quoted excerpt the term SOC leads to the description of SOA, which is not correct, because Jini is P2P and SOP.
    We consider Jini as a, or even the, representative of Service-Oriented Computing of the first generation (SOC 1.0).
    We already quoted from the online encyclopedia about blackboard pattern, tuple space, and Space-Based Architecture (SBA) in the Clarification of the 18th of January 2020.
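
    As a reminder of how such a tuple space works in principle, the following minimal single-process Java sketch implements the Linda-style write, read, and take operations, with read and take blocking until a matching tuple exists; the Space class is hypothetical and far simpler than JavaSpaces, which adds leases, transactions, and distribution:

    import java.util.*;

    public class TupleSpaceSketch {
        // A toy tuple space: write adds a tuple, read copies a matching tuple,
        // take removes it; read and take block until a match exists.
        static class Space {
            private final List<Object[]> tuples = new ArrayList<>();
            synchronized void write(Object... tuple) { tuples.add(tuple); notifyAll(); }
            synchronized Object[] read(Object key) throws InterruptedException {
                while (true) {
                    for (Object[] t : tuples) if (t[0].equals(key)) return t;
                    wait(); // block until another thread writes a matching tuple
                }
            }
            synchronized Object[] take(Object key) throws InterruptedException {
                while (true) {
                    for (Iterator<Object[]> it = tuples.iterator(); it.hasNext(); ) {
                        Object[] t = it.next();
                        if (t[0].equals(key)) { it.remove(); return t; }
                    }
                    wait();
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Space space = new Space();
            new Thread(() -> {
                try {
                    Object[] task = space.take("task");           // blocks until a task appears
                    space.write("result", (Integer) task[1] * 2); // coordinate via the space
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
            space.write("task", 21);                              // producer writes a tuple
            System.out.println(space.take("result")[1]);          // prints 42
        }
    }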

    SOP of the first generation Variant A (SOP 1.0A)
    Jini and Openwings

    We quote the first publication about Openwings of the company Motorola done in 2000 (see also the document titled "Introduction to Service-Oriented Programming (Rev 2.1)" 2001): "Service-Oriented Programming
    The Service-Oriented Programming (SOP) model is the most exciting revolution in programming since Object Oriented Programming. Sun's Jini and Java technologies are key enablers for this new paradigm. Motorola has been refining the Service-Oriented Programming (SOP) model with its Openwings architecture. Together these technologies enable a new generation of service-oriented computing applications.
    Some of the patterns required for service-oriented computing currently are only supported in the Java programming language: [...].
    [...]
    To understand Service-Oriented Programming, one needs to understand some of the paradigms that preceded it. These include: OOP, Client / Server, and Component Models.
    Object Oriented Programming (OOP) is built on the premise that programming problems can be modeled in terms of the objects they represent. Object Oriented Programming has specific characteristics: inheritance, encapsulation, and polymorphism. Service-Oriented Programming builds on OOP, adding the premise that programming problems can be modeled in terms of the services that a component provides and uses.
    Component models prescribe that programming problems can be seen as independently deployable black box client / servers which communicate through contracts. The client / server model has become brittle. Service-Oriented Computing (SOC 1.0) contains components that publish and use services in a Peer-to-Peer (P2P) manner. In SOP a client is not tied to a particular server. Service providers are all treated equivalently.
    [...]

    Java Patterns [1]
    [...]
    Pattern Name: Contracts
    [...]
    Solution: The concept of an interface construct was added to Java to describe a behavior both in syntax and semantics. The methods, method types, method parameter types, and field types prescribe the interface syntax. The comments, method names, and field names describe the semantics of the interface. In this model any object can implement the interface and interfaces can use inheritance, including multiple inheritance.
    [...]

    Jini Patterns [2]
    [...] Jini tackles the issues of distributed computing head on, unlike many distributed computing paradigms before it. For example, distributed computing is fraught with problems, such as partial failures, locality of execution, and interface mismatch. [...]
    [...]
    Pattern Name: Lease
    [...]
    Solution: Both sides agree to lease a resource for a given period of time. Since, lease expiration can be detected by both sides, regardless of host or network failures, it guarantees that a partial failure will be detected correctly by both parties.
    Pattern Name: Discovery
    [...] Solution: A bootstrapping protocol is used to automatically find a lookup service. From there everything else can be found. As long as the bootstrapping technique remains the same software can participate in a plug and operate (PLOP) environment.
    Pattern Name: Lookup
    [...] Solution: Allows publication and lookup of services based on their contracts and attributes. Unlike stovepipe client / servers, service interfaces are published and are usable by any other component.
    [...]
    Pattern Name: Distributed Transaction
    [...] Solution: Transactions have been used for a long time in databases. However, they are also useful in distributed computing. Jini provides a set of contracts for describing distributed transactions.
    Pattern name: Coordinator
    [...]
    Solution: Jini provides a concept called spaces, which is really a combination of a synchronization construct and an object database. The space pattern is based on a prior work called Linda, which was used for parallel computing. The concept is to allow objects to be transactionally written, taken, and read from a shared space. This concept is based on mobile code.

    Openwings Pattern [3]
    Jini and Java go a long way to support Service-Oriented Programming (SOP), however, several elements are missing that would allow development of full-scale service-oriented systems. Openwings is focused on filling these holes, to provide the full set of elements and aspects intrinsic to SOP. The following patterns are described in this section: component, connector, container, context, installer, policy, and proxy.
    [...]
    Pattern name: Component
    [...]
    Solution: A component encapsulates a unit of deployment of hardware (through software) or software. Components are the basic unit of deployment for services. Services provided and used by components are contractually specified. Components utilize a peer to peer model (instead of client server). Components are subject to third party composition and are independent of deployment contexts. Components must be independent of platforms, transport protocols, and deployment environment details (such as network topology). Components can be used as the basic unit for mobile agents.
    Pattern name: Connector
    [...]
    Solution: Connectors provide an abstraction for transport independence. Connectors are grossly divided into two categories: synchronous and asynchronous connectors. Connectors are composed of two proxies: a user proxy and a provider proxy. One proxy provides an object that implements a contract and the other takes an object that implements a contract. Connectors can naturally be chained. They also provide an insertion point for handling transport security and quality of service. Connectors can be acquired in one of three ways: they can be bundled with a service provider, looked up in a repository, or generated on the fly.
    Pattern name: Container
    [...]
    Solution: The most basic container is a service itself: a processing service. The container enforces code security, by setting the Java Security Manager. It also provides a concept missing from Java, the ability to map multiple processes to a single JVM. The container pattern can manage pools of processing resources or JVMs and make load-balancing decisions. Containers can work together to form clusters, which guarantee clustered services are kept running. This feature can be used for cold and warm failover of services. Finally, the container model provides a simple environment to support mobile agents.
    [...]
    Pattern Name: Context
    [...] Solution: A context provides an environment for self-forming and self-healing systems. Core to the context pattern is removing environment specific details from components. By doing this components become truly reusable and deployable in different contexts. Policies are another pattern that can be used to do this (see policy pattern). A context enforces a system boundary, provides for automated installation of components (see installer pattern), provides all of the core services for system formation, and prescribes how services are published and discovered beyond the workgroup.
    [...]
    Pattern name: Proxy
    [...]
    Solution: Proxies can provide an object that implements a contract or take an object that implements a contract. Proxies are the primitives used to create connectors and smart proxies. Smart proxies allow users to add additional layers of functionality behind an interface.

    Conclusion
    Service-Oriented Programming (SOP) is a new paradigm for computer science that requires a different way of thinking of distributed problems. Though the model was originally designed for inter-process communication [(IPC)], it holds true for intra-process communication, i.e. between threads. The reason that threaded and distributed computing is currently fraught with errors, is largely due to the fact that contracts are not clearly defined. It is particularly bad in threading models, where calls are being made directly into the implementations of objects running in different threads.
    Java, Jini, and Openwings are providing the first fully functional framework for SOP. In describing the patterns for SOP it should have become clear that some of the patterns can only be supported in a Java programming environment at this time (namely code mobility and code security). Until these capabilities are added to other languages / paradigms it will be very difficult to implement Service-Oriented Programming in other languages."
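
    The quoted Lease pattern can be made concrete with a minimal Java sketch: both sides know the expiry time, so when the holder stops renewing (e.g. after a host or network failure), the grantor detects the partial failure simply by the lease running out; the Lease class and the timings are hypothetical:

    import java.util.concurrent.*;

    public class LeaseSketch {
        // A toy lease: both parties know the expiry, so a partial failure is
        // detected by either side once the lease runs out without renewal.
        static class Lease {
            private volatile long expiresAt;
            Lease(long millis) { renew(millis); }
            void renew(long millis) { expiresAt = System.currentTimeMillis() + millis; }
            boolean isValid() { return System.currentTimeMillis() < expiresAt; }
        }

        public static void main(String[] args) throws Exception {
            Lease lease = new Lease(200);   // resource granted for 200 ms

            // The holder renews once, then "crashes" (stops renewing).
            ScheduledExecutorService holder = Executors.newSingleThreadScheduledExecutor();
            holder.schedule(() -> lease.renew(200), 100, TimeUnit.MILLISECONDS);

            Thread.sleep(150);
            System.out.println("after 150 ms, lease valid: " + lease.isValid()); // true (renewed)
            Thread.sleep(400);
            System.out.println("after 550 ms, lease valid: " + lease.isValid()); // false: reclaim
            holder.shutdown();
        }
    }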

    For better understanding and completeness we also quote the revised document about Motorola's introduction of SOP 1.0A: "Introduction to Service-Oriented Programming (Rev 2.1)
    [...]
    The inception of the Service-Oriented Programming (SOP) paradigm is being defined throughout the industry including: Sun's Jini, [Motorola's] Openwings, Microsoft's .NET, and HP's CoolTown.
    [...]
    With the advent of Internet technology [...] but the Internet is still very stovepiped; connecting these services together to do more powerful things is very difficult. [...] When it becomes possible to utilize services to create new, more powerful constructs, the power of networking will be fully exploited.
    [...]
    The traditional client-server model often lacks well-defined public contracts that are independent of the client or server implementation. This has made the client-server model brittle.
    [...]

    Service-Oriented Technology
    The software industry has been putting out strong messages that the future of distributed computing is service-oriented. These messages are coming from many of the industry big hitters: Microsoft, Hewlett Packard, Sun Microsystems, and Motorola. The message may be difficult to see, because each company has conceptualized the service-oriented model in their own technology initiatives: Microsoft .NET, Hewlett Packard Cooltown, Sun Microsystems Java / Jini, and Openwings.
    Service-Oriented Programming is a paradigm for distributed computing that supplements Object Oriented Programming. Whereas OOP focuses on what things are and how they are constructed, SOP focuses on what things can do.

    Microsoft .NET
    The Microsoft .NET [1, 2] effort provides an Internet Operating System, bridging applications from the traditional desktop to the Internet. Microsoft has recognized that ubiquitous network connectivity has not been fully exploited. The vision is that future applications will be built not only by integration of local services, but integration of services across the Internet. Microsoft sees this effort as a way to decrease time-to-market, to achieve higher developer productivity, and to improve quality.
    Microsoft is focusing on language independence, as opposed to platform independence. This is a similar approach to that taken by the Common Object Request Broker Architecture (CORBA). .NET ignores object model issues, instead focussing on messaging. This could be interpreted as a direct attack on the Java model. The following table gives a brief summary of the core components of the Microsoft .NET strategy.
    Some parts of the .NET framework are well defined, but others are still immature. Recently, Microsoft demonstrated some of the technology behind the marketing at the Fall 2000 COMDEX trade show. The demonstration focused on the .NET Development Infrastructure. Microsoft is pushing several new technologies to enable .NET: Service Contract Language (SCL), Simple Object Access Protocol (SOAP), Disco, C#, and the Common Language Runtime (CLR).
    Service Contract Language (SCL) is a language for defining language-independent message interfaces. This is very similar to CORBA Interface Definition Language (IDL) and DCOM Microsoft Interface Definition Language (MIDL). The important concept to recognize here is a standard for defining service interfaces.
    [...]
    Disco is Microsoft's upcoming strategy for service discovery. This is yet another spin on Microsoft's plug-and-play technologies. The core concept to identify here is service discovery.
    [...]
    The Common Language Runtime (CLR) is an attempt to bring the service paradigm to Dynamic Link Libraries (DLLs). The concept is to define language-independent interfaces to DLLs that include the object code, interface definitions, and a description of the interface. The key element to notice here again is the concept of a contract.
    In demonstrating their new Visual Studio .NET at COMDEX, Microsoft showed Web Services that had not been combined before being put together to make applications. The ability to put services together in ways not envisioned by their authors is called conjunction.
    Microsoft is also working on servers to host these web services as shown in the following figure:
    Figure 2. .NET Enterprise Servers
    The following tables summarize the SOP elements and characteristics demonstrated by Microsoft .NET.
    Element [] .NET
    Contract [] Service Contract Language (SCL)
    Component [] Web Service Providers
    Container [] .NET servers
    [...]

    Hewlett Packard Cooltown
    Hewlett Packard has elevated the user experience in service-oriented computing to the forefront through a technology called Cooltown [3]. Cooltown is built on web technologies such as Hypertext Transfer Protocol (HTTP). Cooltown promotes the idea of bridging the physical world and digital world (the web). The goal is to give people, places and things (objects) a digital presence on the web that people can then interact with. The Internet contains content relating to the objects around us, but the content is not directly linked to the objects themselves. Cooltown attempts to enrich interaction with the physical world by providing a digital presence that allows information and control to flow naturally around us. Much of this technology focuses on the concept of discovery by location: being able to discover and interact with the objects around you. [...] Cooltown envisions every object being represented by a web page. Pads are handheld devices that support web browsers and object detection. Pads can detect objects by reading bar codes, RF / IR tags, or a beacon (i.e. a Universal Resource Locator (URL) broadcaster). All of these technologies are simply used to deliver a URL. Other supporting technologies are Global Positioning Systems (GPS) and Bluetooth [4]. [...]
    The figure below shows some of the elements of HP Cooltown: beacons, tags, portals, pads, and places.
    A place corresponds to a location in the real world. A tag is provided by an object as a reference to a URL. A beacon delivers an object's URL. A pad is any device that can display a web browser and can sense beacons or tags. A portal provides a connection from a pad to one or more web servers. Examples of portals could be wireless Internet access such as 802.11. [...]
    The one core underlying assumption of HP Cooltown, is a web-based interface. Cooltown does not address programmatic interfaces directly, focusing instead on the user interfaces provided by web pages. This approach inhibits the conjunctive and interoperable aspects of SOP. However, it does demonstrate some of the characteristics of SOP.
    The following tables summarize the SOP elements and characteristics demonstrated by HP Cooltown.
    Element [] Cooltown
    Component[s] [] People, Place, Thing
    Container [] Web Server
    [...]

    Sun Jini Technology
    Sun's Jini Network Technology [5] is a framework for building systems spontaneously. Jini technology makes it possible to build a system out of a network of services. Services can be added or removed from the network, and new clients can find existing services. This all occurs dynamically, with no administration.
    Services are based on well-known interfaces written in the Java programming language. Whether a service is implemented in hardware or software is not a concern. The service object downloaded to a user is supplied by the component providing the service. The client only knows that it is dealing with an implementation of an interface written in the Java programming language. A design based on service interfaces makes it possible to build systems with higher availability. A component can use any service that complies with the interface, instead of being statically configured to communicate with a certain component. The following tables summarize the SOP elements and characteristics demonstrated by Sun Jini[.]
    Element [] Jini
    Contracts [] Service Interfaces
    [...]

    [Motorola] Openwings
    Openwings [6] is a service-oriented architectural framework for building systems and systems of systems. Although not tied specifically to Jini, it builds upon Java and Jini concepts to provide a more complete solution. The figure below shows a high level diagram of the Openwings architecture.
    Several of the core services provide aspects of service-oriented computing, as described below:
    [...]
    The following table summarizes the SOP elements demonstrated by Openwings[.]
    Element [] Openwings
    Contracts [] Service Interfaces
    Component[s] [] Component Services
    Connector[s] [] Connector Services
    Container [] Container Service
    Context [] Context Services
    [...]

    Summary of Service-Oriented Technologies
    All of the initiatives discussed so far have key focus areas and key technologies they depend on, as seen in the following table.
    Initiative [] Focus [] Dependencies
    .NET [] Language Independent Services [] SOAP, IP, SCL, XML, Disco, WINTEL, HTTP
    Cooltown [] User / Service Interaction [] HTTP, IP
    Jini [] Platform Independent Service Discovery [] Java, HTTP, IP
    Openwings [] Service-Oriented Programming [] Java, HTTP, IP

    Elements of SOP
    The analysis of several Service-Oriented technologies has yielded a set of common architectural elements that make up Service-Oriented Programming:

  • Contract - An interface that contractually defines the syntax and semantics of a single behavior.
  • Component - A third-party deployable computing element that is reusable due to independence from platforms, protocols, and deployment environments.
  • Connector - An encapsulation of transport-specific details for a specified contract. It is an individually deployable element.
  • Container - An environment for executing components that manages availability and code security.
  • Context - An environment for deploying plug and play components, that prescribes the details of installation, security, discovery, and lookup.

    [...]

    Architecture Description Language
    Just as Object Oriented Programming has a modeling language, namely Unified Modeling Language (UML), Service-Oriented Programming needs a modeling language as well. Architecture Description Language (ADL) [7] [...] is a modeling language that provides notation for most of these architectural elements of SOP. ADL contains notation for Components, Connectors, Roles, and Ports. A proposed modification to ADL for SOP adds notation for containers and contexts, which could be viewed as specialized components.

    Patterns for Service-Oriented Programming
    In the following pages we present design patterns that apply to Service-Oriented Programming derived from Java, Jini, and Openwings. The patterns from Microsoft .NET and HP Cooltown are covered by these patterns. In the interest of brevity, mini-patterns are used throughout this section as follows: name, problem, context, and solution.
    [...]

    Service-Oriented Example
    [...]
    One important thing to note here is that since discovery is being used, the system is self-forming and self-healing. The system is self-forming because new components can be added dynamically and used in the system without changing the existing components. For instance, if another player is added to the system, it simply appears as another option for audio output. If a player is removed from the system, it is no longer shown as an option for audio output. In a service-based system, it doesn't matter if the device provides a single service or all-in-one capability, since each service is a separate entity.

    Evolving the Problem
    Competitors will inevitably develop competing standards for audio services on the web. The strategy to overcome this is to publish an adapter service that complies with the competitor's service interface definition and translates it to the vendor's definition.
    One way the system might evolve is to extend audio services to the car. Motorola started with car radios and has come back to them, with a twist, in the iRadio [12]. What if playlists of music and information from the home could follow users wherever they go, including the car? If a car can be detected over Bluetooth or HomeRF [13], the audio can be downloaded to the car stereo. The car in this instance would contain an audio store and audio player.
    As wireless connectivity increases, even children's toys can participate in service discovery. For instance, Motorola's new cable modems have HomeRF wireless access built in. In fact, Sally's doll can play her favorite songs or allow Mom to call her down to dinner. Again, this is simply another audio recorder and player.
    This capability could be extended to the daily workout, using a wireless audio player that can store audio. When the device is in the house it is discovered and audio can be sent to it. This could even be the audio recorded from a favorite nighttime comedy. The cable box could become an audio source. More instances of these same components are used to achieve this.
    This same player could be used at house parties or nightclubs. Everyone brings their wireless audio devices containing their favorite songs. The house playlist is generated from this collection of music, creating a more interactive experience.

    Benefits of the SOP Approach
    The technologies to achieve this vendor's evolutionary goals actually exist today. The advantages to the developer are that their existing work was never broken, and extending the system is easy. For users, SOP allows them to connect the services in new ways that add capabilities and value. The ability to redirect audio anywhere in the house, to the car, or to personal devices utilizing wireless technology provides tremendous new value. Instead of running wires between stereo components, the user draws lines between components to achieve the desired configuration. A traditional stovepipe client-server system would have never allowed the expansion or flexibility the SOP solution provides.

    Conclusion
    Service-Oriented Programming (SOP) is a new paradigm for computer science that requires a different way of thinking of distributed problems. Though the model was originally designed for inter-process communication, it holds true for intra-process communication, i.e. communication between objects contained in different threads within the same program. [...]
    The service-oriented approach and service-oriented frameworks such as Openwings provide many benefits for developers and system integrators. Building software components is simplified by the enforcement of good object-oriented design principles. Component design is driven by the interfaces of the services provided. This in turn simplifies integration of systems. The prototyping of components is also simplified. A fully service-oriented component framework such as Openwings Component Services makes it possible to build truly reusable, non-trivial software components.
    The benefits of simplified development and integration are passed on to users. Systems that are designed with redundant services will be highly available, satisfying expectations for systems that always work. Systems that are designed with security at all levels, especially at the service level, will meet user's expectations of systems that protect their secure data. Systems that are designed based on well-known interfaces will be true plug-and-play, satisfying user demand for zero-administration systems.
    Because service interfaces are clearly separated from user interfaces, the same service can be accessible to a wide variety of users who access the service in a variety of ways. Users with increasing levels of expertise can take advantage of more features of service interfaces. Users around the world can use an interface that works in their language. Users will be able to access services with all kinds of different devices. [...]"

    Comment
    SOP 1.0 is enabled by Jini and Openwings, but also by our Evoos (component, connector, container, context, etc.).
    Evoos comprises characteristic elements added to Jini by Openwings, specifically (software) patterns, components, naturally chained synapses and interneurons and hence connectors, os-level virtualization or containerization and hence containers, and for sure context with self-forming and self-healing.
    Evoos also comprises characteristic elements related to microservices and also Associative Memory (AM) (e.g. BlackBoard System (BBS) (e.g. Tuple Space System (TSS))).
    Sun Microsystems' Java Jini is Object-Oriented (OO 1) and Service Object-Oriented (SOO), but is entirely dependent on the Java platform and the Internet Protocol (IP) suite (see also Service-Oriented Computing of the first generation (SOC 1.0) (e.g. SOP 1.0C (e.g. Service ORiented Computing EnviRonment (SORCER)))).
    Motorola Openwings (late 2000) came after C.S.' Evoos (late 1999). The statement that SOP was originally designed for IPC is not conclusive but suspicious, because it requires a component-based os featuring os-level virtualization or containerization, communications based on P2P computing, and other properties, which are not characteristic of a common (microkernel-based) os, but are included in our Evoos. In addition, IPC is based on the Client-Server (CS) computing model, while Openwings clearly says that Jini and Openwings are not, but follow the Peer-to-Peer (P2P) computing model, and both and conceptually all other computing and networking paradigms are included in our Evoos as well. Bingo!!! Ooops, a fatal contradiction.
    The .NET CLR of Microsoft realizes late binding, which is also included in Evoos, showing once again where SOP truly originated.
    The Cooltown of Hewlett Packard is not our Calibre/Caliber and Ontoverse, which are bidirectional, giving digital objects a real presence and giving people, places, and things (objects) in the universe a digital presence that people and machines can then interact with, and Cooltown is not even a mirror world.
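
    For illustration, the elements contract, component, connector, and container discussed above can be mapped to a minimal single-process Java sketch, with connectors chained as described for the Openwings Connector pattern; all names are hypothetical and nothing here reproduces the actual Openwings interfaces:

    import java.util.function.Function;

    public class SopElementsSketch {
        // Contract: syntax and semantics of a single behavior.
        interface Greeter { String greet(String name); }

        // Component: an independently deployable implementation of the contract.
        static class GreeterComponent implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        // Connector: wraps a contract to hide transport details; connectors
        // can be chained, e.g. to add logging, security, or quality of service.
        static Greeter connector(Greeter target, Function<String, String> transport) {
            return name -> transport.apply(target.greet(name));
        }

        // Container: executes components and manages their deployment.
        static class Container {
            Greeter deploy() {
                Greeter component = new GreeterComponent();
                // chain two "transports": identity, then an audit wrapper
                return connector(connector(component, s -> s), s -> "[audited] " + s);
            }
        }

        public static void main(String[] args) {
            Greeter service = new Container().deploy();
            System.out.println(service.greet("world"));
        }
    }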

    SOP of the first generation Variant B (SOP 1.0B)
    Automated Information Router (AIR)

    We quote a second publication about SOP, published in 2002: "Service Oriented Programming: a New Paradigm of Software Reuse
    [...]
    An automated system could easily handle a wider set of connections. This system has to know not only service specific information, but also further information related to the territory to relate services with each other in order to automate decisions that usually are made by human beings.
    [...]
    Service Oriented Programming is a natural evolution of the traditional component based software development. This new technique is mainly web oriented but it allows exploitation of legacy systems that are not designed to be deployed through the Internet. In this way, a developer doesn't need to rewrite any piece of code to deploy data on the Internet or integrate information from this kind of systems with other sources. Only a translation of data is needed to establish a connection to other data sources [30 [WebEntree: A Web Service Aggregator [...] 1998]].
    This paper analyzes the integration issues of localized services using a GIS and its implementation through an integration architecture named AIR (Automated Information Router).
    [...]

    2 Integration Oriented Programming
    2.1 Component-based programming
    In the last 35 years software development has moved from extremely large and monolithic code to component base development [8] [22]. This transition creates problems related to components communication and compatibility. Many standards were developed like COM [6 [COM (Component Object Model) - specifications]], CORBA [7 [CORBA (Common Object Request Broker Architecture)]] and EJB [15 [Enterprise Java Beans Specification 1.1 [1999]]].
    Usually, these components are simple and encapsulate very specific feature like statistic, graphics or e-mail functions. Once a developer chooses a component standard he can use only ones that are selected specification compliant and none of the others.
    [...]
    2.2 Package oriented programming
    A way to develop new applications through integration is the package oriented programming (POP) [21 [Package-Oriented Programming and Engineering Tools [2000]]].
    [...]
    2.3 Service oriented programming
    The Internet provides a large number of simple services that could be integrated to produce new and more complex ones [10]. Service integration is becoming a necessity due to specific services that are available but they are not user friendly if it is required to perform a complex task.
    Web services could be considered components due to three features:
    1. They are developed and deployed independently.
    2. Encapsulate functionality and hide implementation details.
    3. Expose interfaces.
    The last point is of particular interest in the integration community. New standards, like WIDL (Web Interface Definition Language) [28] and WSDL (Web Service Description Language) [29], are emerging and play similar roles to the IDL (Interface Definition Language) in CORBA and other component technologies.
    In this way component base programming could be applied to these non conventional components to develop integrated web services.
    Moreover service integration solve some problems of component base programming and package oriented programming:

  • It is possible and quite easy to integrate components based on different technologies using a specific adapter that converts requests from one communication protocol to another one.
  • The platform used to develop and deploy services does not matter. The best platform for every single application can be chosen.

    [...]
    To provide a useful integrated service, an automated system would handle nearly the whole knowledge related to the problem it has to solve. This knowledge is rarely available from a single source, but it could be available integrating many sources. This integration allows a developer to provide a new system reusing already developed and deployed services.
    Service integration allow reusing not only software components but also their deployment. This is a new paradigm of software reuse that benefits by already running applications avoiding all the effort needed to build and setup the environment required by the specific software component. Reusing service software already deployed produce two main benefits:
    1. It is possible to exploit components that run inside incompatible environments (operating systems, software libraries, etc.)
    2. No time and effort is required to developers to setup the working environment: it is already configured and working on a remote machine waiting for a request
    These benefits allow the construction of a system without any worry about the compatibility of the execution environment, required by components, and the effort spent to build it up allowing developers to focus on the problem they have to solve.

    3 Principles of Service Oriented Programming
    3.1 The integration architecture
    AIR (Figure 1) is both a client and a server: it is a client of elementary services, and a server of the complex integrated services it implements. The architecture comprises several modules, each handling a different protocol (including HTTP [13], SOAP [20], XML-RPC [26], and RMI [17]).
    The architecture comprises three fundamental parts: the integration networks, the builder, and the controller.
    An integration network is the abstraction of an integration. It is a set of nodes and arcs (Figure 2) building a data flow diagram that routes information. Nodes perform elementary functions like access to local resources (files and database), access to external services, and transformations. Arcs connect nodes and describe the path followed by data that are coded as XML documents.
    The builder initializes the system reading the configuration file and creating the integration networks in memory: it creates nodes characterized by specific parameters and connects them. After this initialization it activates the controller that manages the system at run-time.
    The controller is a collection of specific protocol servers like RMI, HTTP/HTML, SOAP, etc. It is the integration server interface that manage translations from protocol specific request to a common XML format used inside integration networks.
    Inside the integration network, the information processing is managed through data-flow [14]. To simplify nodes and connections, the only data type used is XML document.
    This restriction does not affect interoperability of AIR because outside the network there is the controller that handles connections with clients using different protocols and performing data format conversions through a simple syntactic translation. [...]
    [...]
    The integration is focused on web services, as this technology has developed standards that allow interoperability also between services originally not conceived for integration.
    Similar services may require queries expressed in different terms, which leads to syntactic or semantic incompatibility. A syntactic incompatibility is, e.g., the difference in names of query parameters. A semantic incompatibility is, e.g., when two or more terms - that are no synonyms - addresses the same thing or partially equivalent things.
    [...] AIR supports mappings and transformation templates to overcome such incompatibilities. Actually there is no way to generate mapping data automatically, these data are generated manually and stored into a database. Before AIR accesses a service, it queries this database to translate all terms in the request to adapt it to the specific service.
    Finally, AIR handles anomalies in the elementary services, such as network or application failures. If not handled, a problem on one service could impair the whole integration. Since in most cases the services are complementary and do not depend on each other, a failure on one of the services can be handled by omitting results for that single service. In these cases the overall results are partial but the integration still works.
    To construct a complex service, often many levels of integration are required. Basic services could be integrated to provide advanced services that are characterized by more complex properties. These properties could be both functional and non functional (e.g. reliability and performance). Moreover advanced services can be integrated to provide a web application or a further level of advanced service (Figure 3 [Layered service integration]).

    3.2 Problems in service integration
    There are several problems when we try to integrate different services. Some of them are issues about services compatibility, others are about services non functional abilities.
    Compatibility issue includes three different interrelated problems (Figure 4):
    1. Protocol: the communication protocol - e.g., HTTP, FTP, ...
    2. Syntax: the language used to structure the information - e.g., XML [4].
    3. Semantic: the meaning of the terms used in the language [19 [Unifying Heterogeneous Information Models [1998]]] [16 [Enhancing online catalog searches with an electronic referencer [... 2000]]]
    [...]
    Problems related to syntax are harder, especially if a language used to code information is not machine oriented but human oriented. This is the problem of the HTML language: it is presentation oriented not content oriented. In this way an automated translator that transforms an HTML document in a more structured content oriented one, like XML, is not easy to code. Moreover, once coded it requires constant fixes due to style changes inside HTML documents. To perform these kind of data extraction XPath [27] is a useful language to make queries inside structured documents.
    There are another set of problems related to service integration: non functional qualities. In service integration service level agreements (SLAs) [23] become of primary importance. Specific SLAs for basic services could allow to predict the service level of the integration. The most important non functional qualities are: response time and reliability.

    5 Conclusion and Future Work
    The paper has proposed a way to integrate services that are available over the World Wide Web providing a new kind of customized services through an integration architecture called AIR. This integration is made using data-flow paradigm to route information and adapters that translate service specific data to a common representation using XML.
    The web is currently designed for human oriented interfaces (HTML) without a language coded semantic interpretation that makes data hard to process by an automated system. The transition to languages that code semantic will allow the develop of more complex integrated services that are high value to users.
    Public registers of services are becoming reality as languages to describe their functional and non functional qualities and their interfaces. In this way it is possible to build an integration server that not only integrates developer chosen services but it will choose the best basic services at run-time when the user request arrives, adapting its queries to interface and data translations automatically to the chosen one."

    Comment
    The document is only about business processes, Web Services (WS), and Enterprise Application Integration (EAI). But the underlying systems, such as for example the operating system (os), the Virtual Machine (VM) and the Runtime Environment (RE), the DataBase Management System (DBMS), or the Internet and the World Wide Web (WWW), are not of concern.
    The upcoming Semantic (World Wide) Web (SWWW) is not mentioned in this document at all, though the suggestion to utilize Domain-Specific Languages (DSLs) given in the XML format as some kind of languages that code semantics cannot be rejected.
    Note the differences between the variants SOP 1.0A and SOP 1.0B.
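
    For illustration, the quoted AIR approach can be reduced to a minimal Java sketch, in which a builder assembles an integration network of nodes that each perform an elementary transformation on an XML document and a controller routes a request through the network; the sketch collapses the general data-flow graph into a linear pipeline, and all names are hypothetical:

    import java.util.*;
    import java.util.function.Function;

    public class AirSketch {
        // A node performs an elementary function on an XML document (here a
        // plain String); arcs are modeled by the order of nodes in the list.
        static class Network {
            private final List<Function<String, String>> nodes = new ArrayList<>();
            Network node(Function<String, String> fn) { nodes.add(fn); return this; }
            String route(String doc) {
                for (Function<String, String> node : nodes) doc = node.apply(doc);
                return doc;
            }
        }

        public static void main(String[] args) {
            // Builder: creates the integration network from "configuration".
            Network network = new Network()
                // syntactic mapping between query vocabularies
                .node(doc -> doc.replace("<query>", "<q>").replace("</query>", "</q>"))
                // stands in for access to an external service; on failure
                // the result is omitted, as described in the quote
                .node(doc -> {
                    try { return "<result>" + doc + "</result>"; }
                    catch (RuntimeException e) { return "<result/>"; }
                });

            // Controller: translates a protocol-specific request to the common
            // XML format, routes it through the network, and returns the reply.
            String request = "<query>hotels near Pisa</query>";
            System.out.println(network.route(request));
        }
    }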

    SOP
    From an online encyclopedia about Service-Oriented Programming (SOP) published on the 19th of December 2007: "Service-oriented programming (SOP) is a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs. Services can represent steps of business processes and thus one of the main applications of this paradigm is the cost-effective delivery of standalone or composite business applications that can "integrate from the inside-out".
    SOP inherently promotes service-oriented architecture (SOA), however, it is not the same as SOA. While SOA focuses on communication between systems using "services", SOP provides a new technique to build agile application modules using in-memory services as the unit of work.
    An in-memory service in SOP can be transparently externalized as a web service operation. Due to language and platform independent Web Service standards, SOP embraces all existing programming paradigms, languages and platforms. In SOP, the design of the programs pivot around the semantics of service calls, logical routing and data flow description across well-defined service interfaces. All SOP program modules are encapsulated as services and a service can be composed of other nested services in a hierarchical manner with virtually limitless depth to this service stack hierarchy. A composite service can also contain programming constructs some of which are specific and unique to SOP. A service can be an externalized component from another system accessed either through using web service standards or any proprietary API through an in-memory plug-in mechanism.
    While SOP supports the basic programming constructs for sequencing, selection and iteration, it is differentiated with a slew of new programming constructs that provide built-in native ability geared towards data list manipulation, data integration, automated multithreading of service modules, declarative context management and synchronization of services. SOP design enables programmers to semantically synchronize the execution of services in order to guarantee that it is correct, or to declare a service module as a transaction boundary with automated commit/rollback behavior.
    Semantic design tools and runtime automation platforms can be built to support the fundamental concepts of SOP. For example, a service virtual machine (SVM) that automatically creates service objects as units of work and manages their context can be designed to run based on the SOP program metadata stored in XML and created by a design-time automation tool. In SOA terms, the SVM is both a service producer and a service consumer.

    Fundamental concepts
    SOP concepts provide a robust base for a semantic approach to programming integration and application logic. There are three significant benefits to this approach:

  • Semantically, it can raise the level of abstraction for creating composite business applications and thus significantly increase responsiveness to change (i.e. business agility)
  • Gives rise to the unification of integration and software component development techniques under a single concept and thus significantly reduces the complexity of integration. This unified approach enables "inside-out integration" without the need to replicate data, therefore, significantly reducing the cost and complexity of the overall solution
  • Automate multi-threading and virtualization of applications at the granular (unit-of-work) level.

    The following are some of the key concepts of SOP:
    Encapsulation
    In SOP, in-memory software modules are strictly encapsulated through well-defined service interfaces that can be externalized on-demand as web service operations. This minimal unit of encapsulation maximizes the opportunities for reusability within other in-memory service modules as well as across existing and legacy software assets. By using service interfaces for information hiding, SOP extends the service-oriented design principles used in SOA to achieve separation of concerns across in-memory service modules.
    Service interface
    A service interface in SOP is an in-memory object that describes a well-defined software task with well-defined input and output data structures. [...] An SOP service interface can be externalized as a WSDL operation and a single service or a package of services can be described using WSDL. Furthermore, service interfaces can be assigned to one or many service groups based on shared properties.
    In SOP, runtime properties stored on the service interface metadata serve as a contract with the service virtual machine (SVM). [...]
    Service invoker
    A service invoker makes service requests. It is a pluggable in-memory interface that abstracts the location of a service producer as well as the communication protocol, used between the consumer and producer when going across computer memory, from the SOP runtime environment such as an SVM. The producer can be in-process (i.e. in-memory), outside the process on the same server machine, or virtualized across a set of networked server machines. The use of a service invoker in SOP is the key to location transparency and virtualization. Another significant feature of the service invoker layer is the ability to optimize bandwidth and throughput when communicating across machines. [...]
    Service listener
    A service listener receives service requests. It is a pluggable in-memory interface that abstracts the communication protocol for incoming service requests made to the SOP runtime environment such as the SVM. Through this abstract layer, the SOP runtime environment can be virtually embedded within the memory address of any traditional programming environment or application service.
    Service implementation
    In SOP, a service module can be either implemented as a Composite or Atomic service. It is important to note that Service modules built through the SOP paradigm have an extroverted nature and can be transparently externalized through standards such as SOAP or any proprietary protocol.
    Semantic-based approach
    One of the most important characteristic of SOP is that it can support a fully semantic-based approach to programming. Furthermore, this semantic-based approach can be layered into a visual environment built on top of a fully metadata-driven layer for storing the service interface and service module definitions. Furthermore, if the SOP runtime is supported by a SVM capable of interpreting the metadata layer, the need for automatic code generation can be eliminated. The result is tremendous productivity gain during development, ease of testing and significant agility in deployment.
    Service implementation: composite service
    A composite service implementation is the semantic definition of a service module based on SOP techniques and concepts. If you look inside of a black-boxed interface definition of a composite service, you may see other service interfaces connected to each other and connected to SOP programming constructs. A Composite service has a recursive definition meaning that any service inside ("inner service") may be another atomic or composite service. An inner service may be a recursive reference to the same containing composite service.
    Programming constructs
    SOP supports the basic programming constructs for sequencing, selection and iteration as well as built-in, advance[d] behavior. Furthermore, SOP supports semantic constructs for automatic data mapping, translation, manipulation and flow across inner services of a composite service.
    Sequencing
    A service inside of the definition of a composite service (an "inner service") is implicitly sequenced through the semantic connectivity of built-in success or failure ports of other inner services with its built-in activation port. When an inner service runs successfully, all the inner services connected to its success port will run next. If an inner service fails, all the services connected to its failure port will run next.
    Iteration
    [...] Furthermore, any service interface can automatically run in a loop or "foreach" mode, if it is supplied with two or more input components upon automatic preparation. This behavior is supported at design-time when a data list structure from one service is connected to a service that takes a single data structure (i.e. non-plural) as its input. If a runtime property of the composite service interface is declared to support "foreach" in parallel, then the runtime automation environment can automatically multi-thread the loop and run it in parallel. This is an example of how SOP programming constructs provide built-in advanced functionality.
    Data transformation, mapping, and translation
    Data mapping, translation, and transformation constructs enable automatic transfer of data across inner services. An inner-service is prepared to run, when it is activated and all of its input dependencies are resolved. All the prepared inner-services within a composite service run in a parallel burst called a "hypercycle". This is one of the means by which automatic parallel-processing is supported in SOP. The definition of a composite service contains an implicit directed graph of inner service dependencies. The runtime environment for SOP can create an execution graph based on this directed graph by automatically instantiating and running inner services in parallel whenever possible.
    [...]
    Transactional boundary
    A composite service can be declared as a transaction boundary. The runtime environment for SOP automatically creates and manages a hierarchical context for composite service objects which are used as a transaction boundary. This context automatically commits or rollbacks upon the successful execution of the composite service.
    Service compensation
    Special composite services, called compensation services, can be associated with any service within SOP. When a composite service that is declared as a transaction boundary fails without an exception handling routing, the SOP runtime environment automatically dispatches the compensation services associated with all the inner services which have already executed successfully.
    Service implementation: atomic service
    An atomic service is an in-memory extension of the SOP runtime environment through a service native interface (SNI) it is essentially a plug-in mechanism. For example, if SOP is automated through an SVM, a service plug-in is dynamically loaded into the SVM when any associated service is consumed. An example of a service plug-in would be a SOAP communicator plug-in that can on-the-fly translate any in-memory service input data to a Web Service SOAP request, post it to a service producer, and then translate the corresponding SOAP response to in-memory output data on the service. Another example of a service plug-in is a standard database SQL plug-in that supports data access, modification and query operations. A further example that can help establish the fundamental importance of atomic services and service plug-ins is using a service invoker as a service plug-in to transparently virtualize services across different instances of an SOP platform. This unique, component-level virtualization is termed "service grid virtualization" in order to distinguish it from traditional application, or process-level virtualization.
    [...]
    Service instrumentation
    The SOP runtime environment can systematically provide built-in and optimized profiling, logging and metering for all services in real-time.
    Declarative & context-sensitive service caching
    Based on declared key input values of a service instance, the outputs of a non time-sensitive inner service can be cached by the SOP runtime environment when running in the context of a particular composite service. When a service is cached for particular key input values, the SOP runtime environment fetches the cached outputs corresponding to the keyed inputs from its service cache instead of consuming the service. Availability of this built-in mechanism to the SOP application developer can significantly reduce the load on back-end systems.
    [...]
    Inter-service communication In addition to the ability to call any service, Service Request Events and Shared Memory are two of the SOP built-in mechanisms provided for inter-service communication. [...]
    Service overrides
    In SOP, customizations are managed through an inventive feature called Service Overrides. Through this feature, a service implementation can be statically or dynamically overridden by one of many possible implementations at runtime. This feature is analogous to polymorphism in object-oriented programming. [...]
    Consumer account provisioning
    Select[ed] services can be deployed securely for external programmatic consumption by a presentation (GUI) layer, or other applications. Once service accounts are defined, the SOP runtime environment automatically manages access through consumer account provisioning mechanisms.
    Security
    The SOP runtime environment can systematically provide built-in authentication and service authorization. For the purpose of authorization, SOP development projects, consumer accounts, packages, and services are treated as resources with access control. In this way, the SOP runtime environment can provide built-in authorization. Standard or proprietary authorization and communication security are customized through service overrides, plug-in invoker, and service listener modules.
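    [A minimal sketch of such resource-based authorization; the enum Action and the class AccessControl are our illustrative assumptions.]

    // Projects, consumer accounts, packages, and services are all treated as
    // named resources guarded by an access control list.
    enum Action { CONSUME, DEPLOY, ADMINISTER }

    final class AccessControl {
        // resource name -> account -> permitted actions
        private final java.util.Map<String, java.util.Map<String, java.util.Set<Action>>> acl =
                new java.util.HashMap<>();

        void grant(String resource, String account, Action action) {
            acl.computeIfAbsent(resource, r -> new java.util.HashMap<>())
               .computeIfAbsent(account, a -> java.util.EnumSet.noneOf(Action.class))
               .add(action);
        }

        boolean isAuthorized(String resource, String account, Action action) {
            return acl.getOrDefault(resource, java.util.Map.of())
                      .getOrDefault(account, java.util.Set.of())
                      .contains(action);
        }
    }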
    Virtualization and automatic multithreading
    Since all artifacts of SOP are well-encapsulated services and all SOP mechanisms, such as shared memory, can be provided as distributable services, large-scale virtualization can be automated by the SOP runtime environment. Also, the hierarchical service stack of a composite service, with the multiple execution graphs associated with its inner services at each level, provides the SOP runtime environment with tremendous opportunities for automated multi-threading.
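    [A minimal sketch of such automated multi-threading, reusing the illustrative Service interface from the first sketch above; the name ParallelComposite is our assumption. Inner services on the same level of the execution graph have no data dependencies on each other and can therefore be dispatched concurrently.]

    final class ParallelComposite {
        private final java.util.concurrent.ExecutorService pool =
                java.util.concurrent.Executors.newWorkStealingPool();

        // Executes one level of the execution graph in parallel and joins the
        // results before the next, dependent level is started.
        java.util.List<Object> executeLevel(java.util.List<Service> level, Object input) throws Exception {
            java.util.List<java.util.concurrent.Future<Object>> futures = new java.util.ArrayList<>();
            for (Service service : level) {
                futures.add(pool.submit(() -> service.execute(input)));
            }
            java.util.List<Object> outputs = new java.util.ArrayList<>();
            for (java.util.concurrent.Future<Object> future : futures) {
                outputs.add(future.get()); // join
            }
            return outputs;
        }
    }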

    History
    The term service-oriented programming was first published in 2002 [in the document titled "Service Oriented Programming: a New Paradigm of Software Reuse" and published] in [the] book called "Software Reuse: Methods, Techniques, and Tools." SOP, as described above, reflects some aspects of the use of the term proposed by [the authors of said document (quoted above)].
    Today, the SOP paradigm is in the early stages of mainstream adoption. There are four market drivers fueling this adoption:

  • Multi-core Processor Architecture: due to heat dissipation issues with increasing processor clock speeds beyond 4 GHz, the leading processor vendors such as Intel have turned to multi-core architecture to deliver ever increasing performance. Refer to the article "The Free Lunch Is Over". This change in design forces a change in the way we develop our software modules and applications: applications must be written for concurrency in order to utilize multi-core processors, and writing concurrent programs is a challenging task. SOP provides a built-in opportunity for automated multithreading.
  • Application Virtualization: SOP promotes built-in micro control over location transparency of the service constituents of any service module. This results in automatic and granular virtualization of application components (versus an entire application process) across a cluster or grid of SOP runtime platforms.
  • Service-oriented architecture (SOA) and demand for integrated and composite applications: in the beginning, the adoption of SOP will follow the adoption curve of SOA with a small lag. This is because services generated through SOA can be easily assembled and consumed through SOP. The more Web services proliferate, the more it makes sense to take advantage of the semantic nature of SOP. On the other hand, since SOA is inherent in SOP, SOP provides a cost-effective way to deliver SOA to mainstream markets.
  • Software as a service (SaaS): capabilities of the current SaaS platforms cannot address the customization and integration complexities required by large enterprises. SOP can significantly reduce the complexity of integration and customization. This will drive SOP into the next generation SaaS platforms.

    History [(first version published on the 19th of December 2007)]
    SOP was first conceived in 1994 by two siblings, Ash Massoudi and Sandra Zylka, while studying computer science at UC Berkeley and computer simulations in neurobiology at California Polytechnic State University. The construction of the first SOP platform, the Hyperservice Business Platform, was started in the year 2000 by NextAxiom Technology Inc. (NXA), a company dedicated to manifesting SOP as a mainstream practice in process automation and business application development.
    In 2002, Carroll Pleasant at Eastman Chemical Company successfully deployed the first version of the Hyperservice Business Platform to address the performance challenges of a SOA implementation. In 2004, John Seely Brown was the first technology luminary to recognize the potential of SOP as a "paradigm in agility" that can bring "reusability to a level never seen before". Between 2003 and 2004, NXA described its patent-pending innovations in SOP to top executives and technologists from PeopleSoft (now Oracle) and SAP AG and encouraged them to embed NXA's SOP platform. In early 2005, Indus International (now Ventyx), a worldwide leader of asset management and service delivery software, was the first software company to embed the Hyperservice Business Platform. In 2007, after 7 years of stealth operation, NXA started the process of public exposure of its innovations.
    Today, the SOP paradigm is in the early stages of mainstream adoption. There are four market drivers fueling this adoption:

  • Multi-core Processor Architecture: due to heat dissipation issues with increasing processor clock speeds beyond 4 GHz, the leading processor vendors such as Intel have turned to multi-core architecture to deliver ever increasing performance. Refer to the article "The Free Lunch Is Over". This change in processor architecture forces a change in the way we develop our software modules and applications: applications must be written for concurrency in order to utilize multi-core processors, and writing concurrent programs is a challenging task. SOP provides a built-in opportunity for automated multi-threading.
  • Application Virtualization: SOP promotes built-in micro control over location transparency of the service constituents of any service module. This results in automatic and granular virtualization of application components (versus an entire application process) across a cluster or grid of SOP runtime platforms.
  • Service-oriented architecture (SOA) and demand for integrated and composite applications: in the beginning, the adoption of SOP will follow the adoption curve of SOA with a small lag. This is because services generated through SOA can be easily assembled and consumed through SOP. The more Web services proliferate, the more it makes sense to take advantage of the semantic nature of SOP. On the other hand, since SOA is inherent in SOP, SOP provides a cost-effective way to deliver SOA to mainstream markets.
  • Software as a Service (SaaS): [...]."]

    Comment
    The quoted webpage was published on the 19th of December 2007 and is marked with a box saying: "This article does not cite any sources. [...] Unsourced material may be challenged and removed."
    Indeed, the webpage is manipulated and mixes SOP 1.0B with features of SOP 1.0A and our Evoos and OS (see the quotes about Jini, Openwings, and Automated Information Router above). This is always the case when the subject matter is about semantics and the Virtual Machine (VM). For example, the Service-ORiented Computing EnviRonment (SORCER) is based on SOC 1.0 and SOP 1.0, but is not related to the Ontology-Oriented (OO 2) modeling and programming paradigm. Also, the cited document about the Automated Information Router (AIR) does not mention a service invoker and a fully semantic-based approach, but only a controller and "languages that code semantic", suggesting the integration of something like a Domain-Specific Language (DSL) and SOP 1.0, but not the Semantic (World Wide) Web (SWWW) standards and technologies, including ontologies, which were only upcoming in 2002. This suggests that Semantic SOP (SSOP) or SOP 2.0 was also introduced with our OS after Evoos had introduced the basic parts of SOP 1.0 before.
    Multi-threading was not a big concern at the time when Jini was presented, as is also noted in relation to multi-processors. Specifically, adaptive multi-threading and other such functions are not mentioned in any quoted document, with the exception of the material about our OS (see also the comment point before).
    Another example of the accused activity of manipulation and mixing is the atomic service and the security, which reminds us of our OntoBot (OB) and Ontologic File System (OntoFS) components integrated by our Ontologic System Architecture (OSA) with the other Ontologic System Components (OSC), and Ontologic Applications and Ontologic Services (OAOS), including technologies, applications, and services working distributed, virtualized, or on-the-fly.
    Note the different contents of the first and latest versions of the chapter History, which addresses the features added by our OS to our Evoos and SOC, SOP, and SOA.
    Ash Massoudi is the person behind NextAxiom, which should give an indication of who is also bumbling around here all the time, like Michael Sobolewski, who is the person behind the Service-ORiented Computing EnviRonment (SORCER), and other entities.
    "A mission critical factor of a system is any factor (component, equipment, personnel, process, procedure, software, etc.) that is essential to business operation or to an organization. Failure or disruption of mission critical factors will result in serious impact on business operations or upon an organization, and even can cause social turmoil and catastrophes.[1]" But here only mission critical business applications.
    The description of the SORCER in the online encyclopedia references this description about SOP 1.0B, despite SORCER being based on Jini, JavaSpaces, and also Rio, respectively SOC 1.0 and SOP 1.0A, and hence also on Evoos.

    SOA vs. Microservices
    1st comparison

    We already quoted an online encyclopedia about SOA and mSOA in the Clarification of the 18th of January 2020.
    We quote a webpage about microservices published by a first company in the fields of software technology, which is owned by developers also known from the field of software patterns, who are said to be the ones who coined the designation: "Microservices [] a definition of this new architectural term
    The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.
    [...]
    In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

    Componentization via Services
    [...]
    [...] Our definition is that a component is a unit of software that is independently replaceable and upgradeable.
    Microservice architectures will use libraries, but their primary way of componentizing their own software is by breaking down into services. We define libraries as components that are linked into a program and called using in-memory function calls, while services are out-of-process components who communicate with a mechanism such as a web service request, or remote procedure call. [...]
    [...]
    Using services like this does have downsides. Remote calls are more expensive than in-process calls, and thus remote APIs need to be coarser-grained, which is often more awkward to use. If you need to change the allocation of responsibilities between components, such movements of behavior are harder to do when you're crossing process boundaries.
    At a first approximation, we can observe that services map to runtime processes, but that is only a first approximation. A service may consist of multiple processes that will always be developed and deployed together [...].
    [...]

    Organized around Business Capabilities
    [...]

    Smart endpoints and dumb pipes
    [...]
    The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible - they own their own domain logic and act more as filters in the classical Unix sense - receiving a request, applying logic as appropriate and producing a response. These are choreographed using simple RESTish protocols rather than complex protocols such as WS-Choreography or BPEL or orchestration by a central tool.
    The two protocols used most commonly are HTTP request-response with resource API's and lightweight messaging[8]. The best expression of the first is
    Be of the web, not behind the web
    -- Ian Robinson
    Microservice teams use the principles and protocols that the world wide web (and to a large extent, Unix) is built on. Often used resources can be cached with very little effort on the part of developers or operations folk.
    The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don't do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services.
    In a monolith, the components are executing in-process and communication between them is via either method invocation or function call. The biggest issue in changing a monolith into microservices lies in changing the communication pattern. A naive conversion from in-memory method calls to RPC leads to chatty communications which don't perform well. Instead you need to replace the fine-grained communication with a coarser-grained approach.

    Decentralized Governance
    [...]

    Decentralized Data Management
    [...]

    Infrastructure Automation
    [...]

    Design for failure
    [...]

    Evolutionary Design
    [...]

    [Box:] Microservices and SOA
    When we've talked about microservices a common question is whether this is just Service Oriented Architecture (SOA) that we saw a decade ago. There is merit to this point, because the microservice style is very similar to what some advocates of SOA have been in favor of. The problem, however, is that SOA means too many different things, and that most of the time that we come across something called "SOA" it's significantly different to the style we're describing here, usually due to a focus on ESBs used to integrate monolithic applications.
    [...] (Any time you need an ontology to manage your ontologies you know you are in deep trouble.)
    This common manifestation of SOA has led some microservice advocates to reject the SOA label entirely, although others consider microservices to be one form of SOA [7], perhaps service orientation done right. Either way, the fact that SOA means such different things means it's valuable to have a term that more crisply defines this architectural style."]

    Comment
    ...
    Interestingly, the authors call the variant microservice architecture, dropping the "oriented" of service-oriented. Nevertheless, our works have titles, which are legally required to be named.
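    To make the quoted "smart endpoints and dumb pipes" style concrete, the following is a minimal sketch of such an endpoint, using only the built-in HTTP server of the JDK (com.sun.net.httpserver); the /orders resource and its JSON payload are illustrative assumptions. Each such service runs in its own process and is independently deployable, as the quoted definition demands.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public final class OrderService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", exchange -> {
                // The domain logic lives in the endpoint; the pipe only routes bytes.
                byte[] body = "{\"orders\":[]}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start(); // one process per service
        }
    }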

    SOA vs. Microservices
    2nd comparison

    We quote a webpage of a second company in the fields of software technology about microservices: "Service Oriented Architecture (SOA) vs. Microservice
    In a sense, microservices are simply SOAs - i.e., service-oriented architecture - reinvented.
    SOAs arose around the early 2000s. The development teams at that time had separated each service, and then connected them through a service bus or a network. [Not quite right: the first is SOA 1.0, the second is SOC 1.0 and SOP 1.0.]
    However, developers then started to combine SOAs in an attempt to reduce the lag between service bus calls. They found that placing those SOAs together made the application and its services run quicker - i.e., one artifact instead of several.
    This resulted in monolithic applications. But again, developers found that as monoliths grew in terms of features and capabilities, they became more complicated - and costly - to maintain.
    As a result, they broke the monoliths apart again, hence the rise of microservices. [As shown here, it is SOA combined with SOC and SOP.]

    Understanding SOA
    Simply put, you can look at SOAs as individual services running over a bus or communications protocol over a network. Each of the application's services are loosely coupled, but they speak to one another through a messaging protocol.

    Understanding Microservices Applications
    On the other hand, microservices - again, 'SOA 2.0' so to speak - are independent services, but they communicate with each other directly using lightweight protocols, such as HTTP.
    The major difference between SOA and microservices is that each service on the SOA relies on one bus. If you overload the bus, then your application is at risk of crashing. Microservices could speak across multiple interconnected pathways. Unlike SOA, there's no single point of failure.

    Microservices vs. SOA: Which is Right for Your Needs?
    To be clear, you shouldn't look at SOA as an alternative to microservices; rather, microservices have superseded SOA as the method of managing multiple services in an application.
    Why Microservices?
    So, why would you select microservices over monolithic and SOA applications?
    Decoupled
    Each microservice is independent. Therefore, it's easier to test and deploy as it's a standalone entity. Whatever happens to the service, happens to it alone without harming the application as a whole. This aspect also helps you prevent your application from cascading failure.
    Performance
    You can scale each microservice independently. So if one feature is getting more demand and, in turn, you need more resources, you can allocate more of the specific resources that service needs. So if the service just needs more memory, you can spin-up more memory - there's no need to spin up an entire virtual machine (VM) for one thing.
    Team Organization
    With microservices, you can also scale the development side. You could assign a team to back one service and, in turn, have multiple teams working in parallel on the application. You can get more development work done in a shorter period of time.
    Reduce Technical Debt & Barrier to New Technologies
    If a new language or technology comes out, you can replace a specific microservice with a new one. There's no need to re-write the whole application to just update or add a few features.

    Why SOA? (... no seriously, why?!)
    In less common situations, you might need to consider SOA.
    You could have a scenario where the organization's other architecture was already built on SOA, so you'll need to develop atop of it. However, these are niche cases. You should look at SOA as an obsolete application architecture.
    If you have modest development needs, such as a back office application, then you may be better off developing a no-frills monolithic application than adopting SOA.
    Otherwise, if scaling, adding new features, and being cloud-native are on your roadmap, then you should move ahead with microservices. Yes, there's a high upfront cost, but you'll save in the way of lower hosting costs (via managed cloud services) and lower technical debt."

    Comment
    Microservices are viewed as SOA 2.0, but ...

  • SOA 1.0 is inherent in SOP 1.0A, and
  • SOP 1.0A is inherent in mSOA,

    which suggests that the designations microService-Oriented Computing (mSOC) and microService-Oriented Programming (mSOP) would be the better characterizations.

    Summary
    Working this out resulted in an overall combinatorial explosion.

    SOC 1.0 and SOP 1.0A are based on the Peer-to-Peer (P2P) and tuple space models (see Jini and Openwings).
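    For illustration, the following is a minimal, generic tuple space sketch in Java, showing the write and take operations with template matching in the spirit of JavaSpaces; it is our own illustrative model and not the actual net.jini.space.JavaSpace interface.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    final class TupleSpace {
        private final List<Object[]> tuples = new CopyOnWriteArrayList<>();

        // A producer writes a tuple into the shared space.
        void write(Object... tuple) { tuples.add(tuple); }

        // A consumer takes the first tuple matching a template; null fields in
        // the template act as wildcards. Returns null if nothing matches.
        synchronized Object[] take(Object... template) {
            for (Object[] tuple : tuples) {
                if (matches(tuple, template)) {
                    tuples.remove(tuple);
                    return tuple;
                }
            }
            return null;
        }

        private static boolean matches(Object[] tuple, Object[] template) {
            if (tuple.length != template.length) return false;
            for (int i = 0; i < template.length; i++) {
                if (template[i] != null && !template[i].equals(tuple[i])) return false;
            }
            return true;
        }
    }

    // Usage: services coordinate peer-to-peer through the space, not through direct calls.
    //   space.write("task", 42);
    //   Object[] task = space.take("task", null); // -> ["task", 42]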
    Microservices are cloud-native. [What is Service Object-Oriented Architecture (SOOA) and ... and a federated method invocation, and federated Service-Oriented Computing (SOC) and front-end federated Service-Oriented Programming (SOP)?]

    Autonomic Computing (AC) is a feature that can be found in SORCER as well, but also in Evoos.

  • mapping SOA 1.0 on P2P results in mSOA, but SOC 1.0 and SOP 1.0A are already using P2P
  • applying SOA on microkernel and kernel services results in mSOA
  • mapping of microkernel kernel services running in user space on microservices
  • Semantic Service-Oriented Computing (SSOC) (Semantic (World Wide) Web (SWWW) standards and technologies with Service-Oriented Computing of the first generation (SOC 1.0))
  • Semantic Service-Oriented Architecture (SSOA) (Semantic (World Wide) Web (SWWW) standards and technologies with Service-Oriented Architecture of the first generation (SOA 1.0))
  • Service-Oriented Architecture of the second generation (SOA 2.0) (Service-Oriented Architecture of the first generation (SOA 1.0) with Autonomic Computing (AC) or Service-Oriented Computing of the first generation (SOC 1.0) with Service-Oriented Architecture of the first generation (SOA 1.0))
  • SOA 2005/2006 (Semantic (World Wide) Web (SWWW) standards and technologies with Service-Oriented Architecture of the second generation (SOA 2.0) or Semantic Service-Oriented Architecture (SSOA) with Autonomic Computing (AC) or ... or ...)

    See also The Proposal about our Evolutionary operating system (Evoos) for the following chapters:

  • chapter 2.2.1 Components of Operating Systems
    "Among the components of most operating systems are (according to [Sil[berschatz, Galvin: Operating System Concepts. ...],1994]):
    • the management of processes,
    • the administration of the main memory,
    • the management of non-volatile memory,
    • the file management,
    • the security system,
    • the network functions and
    • the system of the command interpreter"
  • chapter 2.2.2 Services of an Operating System
    "The services of an operating system can be divided into two sets:
    • The first set mainly supports a user in the role of a software programmer. It contains services for program execution, file system manipulation, and communication, and also for input/output operations and error detection.
    • The second set does not specifically serve one user, but ensures the efficient execution of the operating system in multi-user operation. It includes the services for resource allocation, billing for resources used, and the execution of security measures."
  • chapter 2.4 Virtual Machine
  • chapter 2.6 Negative Characteristics of Operating Systems
    "Generally speaking, an operating system is a complex software. This is difficult to maintain and change, so repairs tend to destroy the software system structure and to increase the entropy in a software system (according to [Vli[ssides, John M.; Coplien, James O.; Kerth, Norman L.: Pattern Languages of Program Design 2. ...], 1996]). Furthermore, software evolution is not predictable or determinable, since future hardware and software technological requirements are unknown.
    Much more serious is the fact that user requirements are generalized in the development of operating systems. If a user has to perform more specific tasks than an operating system allows, she or he must either extend the used operating system or even change the operating system. As a result, the user comfort of a computer system suffers [...].
    Moreover, in future "The value of computer systems [...] will be judged not by how well they are suited to the one purpose for which they were designed, but by how well they are suited to applications for which they were never intended". (see [Kay[, Alan: Computer Software. ...], 1984]). Today's operating systems do not have this required flexibility at all, so that a realization of such computer systems is not possible at present."
  • chapter 2.7 New Requirements for Operating Systems from the Perspective of Software Technology
    "[...] If an operating system is again regarded as complex software, these new requirements should be realized from the point of view of software technology by considering software development patterns (see [Vli, 1996]). Of particular interest are the software development patterns software tectonics, metamorphosis and flexible foundations.
    2.7.1 Software Tectonics
    The software development pattern software tectonics emphasizes that software systems are also subject to a certain evolution. Accordingly, it requires to adopt a suitable view on software on the part of the developer. The software development pattern is given by the following keywords:
    • Evolution or death
    • Evolution not revolution
    • Do not construct software, but let it "grow up"
    • Cyclic and incremental development process

    2.7.2 Metamorphosis
    The software development pattern metamorphosis emphasizes the omnipresence of the demand for flexibility of software and the problem that this demand can partly only be solved by using dynamic methods. The following keywords describe the software development pattern:

    • Metainformation
    • Change rules
    • Change service
    • Dynamic scheme
    • Dynamic languages
    • Removal of "superfluous"
    • Late binding

    2.7.3 Flexible Foundations
    This development pattern provides information about design features of software systems that enable the resistance of these systems against changes. It should also help to cope with the requirement for continuous and incremental software evolution, as set out on the part of the development pattern software tectonics. The three most important keywords of the software development pattern flexible foundations are:

    • Open architecture
    • Open implementation
    • Co-evolution of an operating system and its foundations"
  • chapter 3.2 Functioning of a Brain
    "According to these findings, the model of a permanently connected network is no longer tenable (see [Chevallier]).
    [...]
    The idea of a modern computer-like operating system of a brain is linked to the proposal of Johnson-Laird (see [Joh[nson-Laird, Philip: Mental Models. ...], 1983]). The designed system corresponds to a self-organizing neural network. Possible deficits in a subnetwork do not stop the actions occurring in the whole network.
  • chapter 5 Summary
    "[...]
  • the hearing - the microphone, the network card and the modem
  • [...]
  • the speaking - the loudspeaker, the network card and the modem"

    With the only exception of the tuple space pattern, model, architecture, or paradigm, which is implemented with JavaSpaces included in Jini, there is no SoftBionics (SB), Mediated Reality (MedR), Big Data Processing (BDP), Data Science and Analytics (DSA), and so on.
    As can be seen even better now, C.S. put everything on the next generation and integrated the old generation and the next generation of everything all in one with the Ontologic System (OS) and the Ontoscope (Os), which is a part of the OS.

    Nevertheless, our claim still holds that the foundations were not considered by SOx.

    What does the overall result mean? Simply, we have shown that at the very basic level of ICT and engineering, and even of the modern societies, there are only Jini and our Evoos. Indeed, it is now a little strange for us why Jini is based on the blackboard metaphor, pattern, or model, specifically a tuple space, which is related to Artificial Intelligence (AI) and Associative Memory (AM). Evoos, doubly literally spoken, includes the DNA of SOC and SOP, microservices, cloud computing, IaaS, PaaS, and SaaS, SDx, NFV, and so on, and the OS adds even more.
    Howsoever, Jini only makes real sense together with the features added by Openwings. But some of the most important of these features are already included in our Evoos, as is also the teaching of the author of Openwings, who shows that this is indeed the case without naming our Evoos (keyword IPC; see above for details).
    The immediate implication is quite simple: The foundations of the already revolutionary SOP are parts of our even more revolutionary Evoos. We even have to ask the question whether the term Service-Oriented Computing has its origin in our Evoos, because the latter still speaks of a kernel service and not of a kernel application. And at this point an incredibly huge avalanche of implications is set off, which includes cloud computing, even Windows .NET and Amazon Web Services (AWS) Elastic Compute Cloud (EC2), microservices or microService-Oriented Computing, microService-Oriented Architecture, and so on.
    In retrospect, it can be seen that it would have been very hard or even impossible to avoid a causal link with our Evoos, and since 2006, when we added even more revolutionary things, also with our OS including Evoos. This has consequences when looking at the licensing of our AWs and IPs, as already discussed in the latest issues of our SOPR.
    It simply says that not just the modern ICT and engineering sectors are us, but the whole digital world, and, due to the transition of the whole world in this direction, ....

    Btw.: Please note that this clarification and the other referenced clarifications are about the prior art and therefore are legally binding and constitute the foundation for the decisions made in relation to the legal matter concerning our SOPR and our other managing and collecting societies.


    30.January.2020

    Comment of the Day

    Greta energy™
    Greta travel™
    Greta fund™
    Green Greta™
    Great Greta™

    Blitz style™

    Ontonics Further steps

    Only a few last slots (not bets) of our 1 trillion USD funds

  • OntoLab Vision Fund I and
  • Blitz Fund I

    under the lead of our Hightech Office Ontonics are unplaced.
    While it takes some more time to complete the formal and legal matters, we have made a few technical and operational preparations before we let our Superunicorns off the leash and our Superbolts off the grid, so that they can dash their first blitzes.

    We are also working on some surprises for those of our fans and readers who are bored again. These are related to the

  • expansion and strengthening of our management,
  • establishment of an extremely competent marketing department, and
  • selection of some more takeover candidates

    to enter and redefine the competition and market sectors in our incomparable Blitz style once again for advancing our enterprise beyond ludicrous growth.

    © or ® or both
    Christian Stroetmann GmbH
    Disclaimer