News 2019 January

01.January.2019
New Year 2019
The OntomaX team wishes our friends, supporters, and fans a happy new year.


04.January.2019
Ontonics Further steps
We are very delighted to inform the public that our unforeseeable and unexpected, original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S., were chosen as the backbone of civilization and that our already incredible success story is continuing.

For example, one of the many uncommon intentions behind and characteristic expressions of our Ontologic System is its specific nature of a belief system that provides the objective truth in reality and virtuality as well as in our total fusion of them as New Reality (NR), which is manifested by our Ontoverse (Ov), so that users can still trust what they see, hear, smell, taste, or feel.
Needless to say, this is also the reason why the wild press, the politics, and other entities acting in the grey zone and beyond are not allowed in the 1st to 4th rings of the management structure and the assigned ID spaces of the IDentity Access and Management System (IDAMS) structure of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV).

A second example comprises the fields of Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES), including

  • Industrial Internet of Things (IIoT) and
  • Industry 4.0.

A third example is the field of SoftBionics (SB), including

  • Artificial Intelligence (AI),
  • Machine Learning (ML),
  • Computer Vision (CV),
  • Cognitive Vision (CogV),
  • Cognitive Software Agent System (CSAS),
  • Cognitive Computing (CogC),
  • and so on,

    specifically its integration with the fields of

  • cloud computing,
  • mobile computing,
  • CPS, IoT, and NES, and
  • Big Data Processing (BDP).

    Yet another example is the field of cyber security, specifically utilizing our field of SoftBionics (SB) for the management and protection of real and virtual systems, as done, for example, with the 1st ring of the management system of our ON, OW, and OV.

    And one more example is the field of healthcare, specifically data-driven healthcare with an emphasis on prediction and prevention rather than cure.

    At this point we have shown once again why

  • our Ontologic System and our Ontoscope,
  • their architectures, and
  • every single feature and functionality of them, even the ones related to operating systems and data stores,

    are protected by the copyright and other rights and are not allowed to be reproduced or performed in a way that is not allowed by C.S., specifically under a license that is not accredited by C.S..


    05.January.2019

    07:42 UTC+1
    Clarification

    *** Proof-reading mode ***
    Besides the companies of the automotive industry sector, many other companies are presenting technologies, products, and services based on our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S..

    One example is a smart speaker with a radar-like system, which detects the approach of an owner and could be used to trigger diary reminders or to make the smart speaker adjust its volume according to how close the person is. Obviously, it is just another variant of an Ontoscope, also presented by, for example, the company Amazon with another kind of sensor.
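    The described behaviour of such a proximity-aware speaker can be sketched in a few lines; the function and parameter names below are purely illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: adjusting a smart speaker's volume based on how close
# a person is, as estimated by a radar-like proximity sensor.
# All names and parameter values here are illustrative assumptions.

def volume_for_distance(distance_m: float,
                        min_volume: int = 2,
                        max_volume: int = 10,
                        max_range_m: float = 5.0) -> int:
    """Scale volume linearly with distance: the closer the person, the quieter."""
    # Clamp the distance into the sensor's assumed working range.
    d = max(0.0, min(distance_m, max_range_m))
    # Linear interpolation between min_volume (at 0 m) and max_volume (at range).
    return round(min_volume + (max_volume - min_volume) * (d / max_range_m))
```

    For instance, under these assumed parameters a person standing right at the device yields volume 2, and a person at the edge of the assumed 5 m range yields volume 10.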

    Another example, "a kind of [(meta)] smart assistant for smart assistants, which issues commands on behalf of a user to, for example, Amazon Alexa or Google Assistant based on past behaviour", is basically an essential part of our

  • OntoBot,
  • Ontologic Applications and Ontologic Services (OAOS), and also
  • moderator system, dynamic federation system, and service meshing system of the infrastructure of our Society for Ontological Performance and Reproduction (SOPR) (see the issue SOPR #142 of the 1st of October 2018),

    which has absolutely no chance to get an allowance or even a license for working together with our Ontologic System (OS) and our Ontoscope (Os), specifically for operating in our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), as is the case with Free and Open Source Software (FOSS) published under a license that is not accredited by our SOPR.

    But there is still room at the bottom. A company has presented a virtual helper designed for use in cars respectively a "proactive contextual AI-enabled digital [...] intelligent automotive assistant technology in the voice recognition space". But at this point it did not stop with infringing our rights and was even so incredibly bold, and only cheap, stupid, and daft, as to call that plagiarism Chris in relation to the explanations given in the section History of the webpage Overview of the website of OntoLinux.
    Our legal team is already working on this infringement of the right of personality of C.S., because the character string "CHRIS" constitutes a causal link with our truly original and unique works and innovations, and is not merely used as a designation or mark for a product or service.

    Btw.: Every major hardware and software as well as vehicle manufacturer already has its own copy of our assistant technology or another essential element of our OS or Os.
    Some labs are really not smart, as we already know in the case of some media entities as well.

    07:25 and 13:44 UTC+1
    More evidence of Hyundai mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The marque Hyundai→Kia has presented a vehicle interior with the so-called Space of Emotive Driving interactive concept and the so-called Real-time Emotion Adaptive Driving (R.E.A.D.) technology. From a related report of a leading automotive media company specialized in the publication of fabricated news and fake news we got the following information: "Kia presents [...] a technology, which should allow the automatic emotion-oriented [or sensitive] control of the design of the vehicle interior [or passenger compartment]. [...] For this Kia developed with the "Affective Computing" group of the Media Lab of the Massachusetts Institute of Technology (MIT) a technology, which is called "Real-time Emotion Adaptive Driving". The system recognizes bio-signals and draws conclusions on the emotional state of the driver on the basis of Artificial Intelligence (AI). Concretely, sensors recognize the facial expression and measure the skin conductance (Electrodermal Activity, EDA) and the pulse among other things. With the AI technology deep learning, the system establishes a standard in relation to the user behaviour of the passengers and on this basis recognizes behaviour patterns and preferences. Correspondingly, the design of the interior is adapted [...].
    A complete automatic control on the basis of emotion recognition would be new.
    [...]
    In addition, Kia presents [...] a system for the recognition of touch gestures. Thereby, a 3D camera observes the eyes and the fingertips of the passenger. In doing so, the passengers can make adjustments by finger gestures for the illumination, the air conditioning [or climate control], the entertainment system, and the ambience of the interior space [or cabin environment]. According to Kia, the functions are managed through an "unobtrusive" Head-up Display.
    [...]
    Also [not] new: a music control for the seats. According to Kia, the passengers should not only hear but also feel the music by adjusting the vibrations of the seats to the audio frequencies. [...] The seats are also equipped with massage programs and, in combination with assistance systems, a warning vibration function."
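    The baseline-and-deviation idea behind such emotion-adaptive systems can be sketched with a toy statistical stand-in for the deep-learning model mentioned in the report; this is an illustrative assumption, not Kia's or MIT's actual method.

```python
# Toy sketch of emotion-adaptive sensing: keep a running baseline of a
# passenger's bio-signal (e.g. skin conductance or pulse) and flag readings
# that deviate strongly from it. A simple z-score stands in for the
# deep-learning model described in the report; names are illustrative.

from statistics import mean, stdev

class BiosignalBaseline:
    def __init__(self, threshold: float = 2.0):
        self.samples: list[float] = []
        self.threshold = threshold  # deviation (in std units) counted as notable

    def observe(self, value: float) -> None:
        """Record one sensor reading into the personal baseline."""
        self.samples.append(value)

    def is_deviation(self, value: float) -> bool:
        """True if a reading deviates strongly from the learned baseline."""
        if len(self.samples) < 5:   # need enough history to form a baseline
            return False
        mu, sigma = mean(self.samples), stdev(self.samples)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > self.threshold
```

    A detected deviation could then trigger the interior adaptation (lighting, climate, music) described in the quoted report.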

    By the way, the sensors were a 2D webcam, which followed a pupil of a user, and sensors, which were integrated in the buttons of a computer mouse, when we got knowledge about the field of affective computing and the project Blue Eyes for the first time around the years 1998 and 1999.
    Needless to say, the author of the report does know our publications, as is also already proven with his other fabricated and published reports, and has lied about the novelty of that plagiarism to mislead the public about our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S., and also for the commercial advantage of the publisher, as usual.

    Simply take a look at the

  • chapter 6 Ausblick==Outlook of The Proposal, which describes the predecessor of our Ontologic System (OS) with our Evolutionary operating system (Evoos) and also references the field of affective computing at its end "[j]üngere Forschungsprojekte am Media Lab des Massachusetts Institute of Technology und dem Almaden Lab des Unternehmens [IBM] versuchen mittlerweile Computer-Systeme zu entwickeln, die dem Benutzer mehr Aufmerksamkeit geben. Erste prototypische Systeme können zum Beispiel auf die körperliche, als auch auf die emotionale Verfassung eines Benutzers eingehen==[y]ounger research projects at the Media Lab of the Massachusetts Institute of Technology and the Almaden Lab of the company [IBM] are attempting to develop computer systems in the meantime, that give the user more attention. First prototypical systems are able to react on the physical as well as the emotional condition of a user for example",
  • section Integrating Architecture of the webpage Overview of the website of OntoLinux,
  • webpage Ontologic Applications of the website of OntoLinux,
  • section Multimedia of the website of OntoLinux,
  • section Exotic Operating System of the website of OntoLinux, where also Real-Time Operating Systems (RTOSs) are referenced,
  • webpage Environment of the website of Style of Speed, which clearly says that "we have developed several visual and multimodal digital interior and exterior environments for vehicles, that are powered by OntoLix and OntoLinux" and lists our solutions

    and

  • webpage Active Interior, which clearly says that "[t]he Active Interior provides uncountable many possibilities to realize appealing applications, like for example the following: [...] With Active Actuators for example the [...] screens of the entertainment, information, and navigation systems, as well as other interior parts are extend- and retractable, or/and moved to the favored positions." and "The Active Camera [...] can also be used for monitoring the interior in an intelligent way to control other Active Components [...].", and lists our

    and also note that

  • our Space@Car is based on our Multilingual Multimodal Multiparadigmatic Multidimensional Multimedia User Interface (M⁵UI) based on Emotional Intelligence (EI) and the Ontoscope Components (OsC) and the Ontoscope Architecture (OsA), which has all kinds of sensors, and
  • other companies have already presented interior concepts based on our original and unique Ontologic System in the past as well like for example
    • Volkswagen→Audi with the Affective Intelligent Driving Agent (AIDA) (see its case in the Investigations::Car #364 of the 28th of October 2012),
    • Toyota with its AI and EI based agent variant of our OntoBot (see its case in the Investigations::Car #402 of the 5th of January 2017),
    • Volkswagen with another interior concept based on our OS, and
    • Tata→Jaguar Land Rover with its so-called Sixth Sense Projects.

    Therefore, it is easy to understand that Kia has infringed our copyright and other rights of C.S. and our corporation.
    Anyway, Kia has to sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well.

    07:25, 15:50, and 21:19 UTC+1
    More evidence of Nissan mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***

    Obviously, Nissan has discovered a new, unexpected and unforeseeable functionality of our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. From a related report of a leading automotive media company specialized in the publication of fabricated news and fake news we got the following information: "On this year's [consumer electronics exhibition] Nissan presents its technological future prospects. The focus is on the new [our] technology platform [here called] "See the Invisible", which suggests an improved networking of the vehicles among each other and with the Internet."
    From another related report of another leading automotive media company specialized in the publication of fabricated news and fake news we got the following information: "Nissan would like to prove with its [Invisible-to-Visible (]I2V[)] system that a car can look around the corner in the future. The study [...] shows an Augmented Reality [(AR)] solution, which projects a circular map into the viewing field of the driver. By means of sensors and data out of the cloud it should be possible for the vehicle to know what is happening in front of it. Thereby, the look around a corner of a house should be possible as well, which reduces the accident risk."
    From another report of a leading computer technology media company specialized in the publication of fabricated news and fake news we got the following information: "Nissan announced Friday that it will display something called "invisible-to-visible" (i2V) technology at [a consumer electronic exhibition]. At its highest level, i2V crunches data from just about every source possible to give either a driver or an autonomous car a better idea of the world around the vehicle. It also involves a virtual world called the Metaverse, which is capable of beaming 3D avatars into the cabin for various tasks or just plain ol' company.
    Let's start with the data-crunching part, which is far more rooted in our current reality. Sensors both inside and outside the vehicle send information to Nissan's Omni-Sensing cloud, which can use that data to "map" a space around the car, highlighting pertinent information like road signs and pedestrians. That cloud data can be used later on when other vehicles enter the same area, giving them a bit of an advantage in knowing what's ahead. It can even suggest what lane to be in.
    And then there's the Metaverse, which is where it gets weird. This part of the i2V system is capable of beaming three-dimensional avatars into the vehicle. These avatars represent actual flesh-and-blood human beings, apparently. Nissan notes a few examples of how this Metaverse could be useful - for example, a professional driver avatar could ride shotgun and offer suggestions on being a better driver, or an avatar of a local could help road trippers discover places to eat in a town they've never been through. The Metaverse can also drop your friends or family into the car for a little company on a long, lonesome trip.
    Any information pertinent to the driver, whether it's related to the Omni-Sensing cloud's data gathering or the Metaverse avatars, will be displayed across the entire windshield.
    Of course, the technology to beam 3D avatars into actual vehicles doesn't exist quite yet, so Nissan's [exhibition] demonstration will require a little help. To experience the i2V system, visitors to Nissan's booth will need to don augmented-reality goggles that will show what the experience could be like."
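    The data-crunching part of the quoted description, i.e. pooling sensor reports in a shared cloud map that later vehicles can query, can be sketched as follows; all names are invented for illustration and are not Nissan's Omni-Sensing API.

```python
# Illustrative sketch of a shared cloud "map": vehicles report detections
# tagged with a coarse grid cell, and vehicles entering the same cell later
# can query what lies ahead. Class, function, and constant names are
# assumptions made for this example only.

from collections import defaultdict

CELL_SIZE_M = 50.0  # assumed coarse map resolution

def cell_of(x_m: float, y_m: float) -> tuple:
    """Map a position to a coarse grid cell."""
    return (int(x_m // CELL_SIZE_M), int(y_m // CELL_SIZE_M))

class SharedMap:
    def __init__(self):
        self._detections = defaultdict(set)  # cell -> set of object labels

    def report(self, x_m: float, y_m: float, label: str) -> None:
        """A vehicle uploads one sensor detection to the cloud map."""
        self._detections[cell_of(x_m, y_m)].add(label)

    def whats_ahead(self, x_m: float, y_m: float) -> set:
        """A later vehicle asks what was detected in its current cell."""
        return set(self._detections.get(cell_of(x_m, y_m), set()))
```

    This captures only the aggregation idea; the quoted report layers lane suggestions and AR rendering on top of such pooled data.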

    Needless to say, the author of the report does know our publications, as is also already proven with his other fabricated and published reports, and has lied about the novelty of that plagiarism to mislead the public about our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S., and also for the commercial advantage of the publisher, as usual.
    At least, we got the next confirmation that our masterpieces were unforeseeable and unexpected, weird visions of pie-in-the-sky technology, which proves their protection by the copyright and other rights of C.S. and our corporation.

    We do not need to hold a long discussion about the true origin of this technology. In fact, it is one of the more magic features and functionalities of our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S., based on its predictive capabilities, Caliber/Calibre, and all-encompassing knowledge about its Ontologic uniVerse (OV).
    Simply take a look at the

  • webpage Overview of the website of OntoLinux, specifically its sections
  • webpage Terms of the 21st Century of the website of OntoLinux, specifically its sections
  • section Network Technology of the website of OntoLinux, specifically the links to
    • Peer-to-Peer (P2P) computing and
    • grid computing, cloud computing, and edge computing,
  • webpage AutoSemantic::Car of the website of Style of Speed, specifically its feature "proactive-drive/predicts the future",
  • webpage Active Components of the website of Style of Speed, specifically its interplay on the basis of our Ontologic System (OS) with our
  • webpage Hyper Connectivity of the website of Style of Speed, specifically our computing platforms
  • webpage Environment of the website of Style of Speed, specifically our

    and also

  • systems based on our Ontologic System and Ontoscope, and related technologies, products, and services of for example the
    • marque Volkswagen→Audi (see its cases in the Investigations::Car #204 of the 5th of December 2009 and #393 of the 8th of October 2013),
    • collaboration of the company Robert Bosch and the marque Daimler→Mercedes-Benz Trucks with the so-called Predictive Powertrain Control system of Robert Bosch's assistance system Eco.Logic motion with electronic horizon,
    • marque Daimler→Mercedes-Benz (see its case in the Investigations::Car #387 of the 31st of May 2013),
    • collaboration of the companies Continental and International Business Machines (see their case in the Investigations::Car #393 of the 8th of October 2013),
    • company Robert Bosch with its so-called VisionX concept study of a truck, and
    • consortium Here Technologies with its anticipatory data and sensor support for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving applications based on its Electronic Horizon (see its case in the Investigations::Multimedia, AI and KM of the 3rd of December 2017).

    Therefore, it is easy to understand that

  • our vision and New Reality (NR) are, literally spoken, being made a reality and
  • Nissan has infringed our copyright and other rights of C.S. and our corporation with its next gatecrash. Obviously, Ghosn has created a very specific business culture and ethics around himself.

    Anyway, Nissan has to sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well.

    Btw.: We are wondering why journalists are

  • calling these ingenious elements of our iconic masterpieces titled Ontologic System and Ontoscope
    • "pie-in-the-sky tech",
    • "decades-away tech that features so many buzzwords it should be given an honorary marketing degree", and
    • "weird",

    and

  • talking about other nonsense like the question "How many small wonders does it take to replace one big revolutionary innovation?"

    when the industries and the politics are working on nothing else anymore than our Ontologic System (OS) and Ontoscope (Os).
    But what worries us much more are all these very serious infringements of the rights of C.S. conducted by the press, because they constitute a frontal attack on democracy and show once again that we have a serious problem with the press and its protection by the constitutions of democratic states, indeed.

    By the way: The answer to the question "How many small wonders does it take to replace one big revolutionary innovation?" is "Infinitely many in the case of our incredibly huge revolutionary OS!"

    Furthermore, we would like to give the recommendations that large companies and their business partners and proxies

  • are cautious about what is presented as innovations and own innovations at the consumer electronics exhibition,
  • tell the press that it can stop its nonsense and harassments, because the OS is now here and the legal situation is now crystal clear as well, and
  • ban that meta smart assistant and that other Chris thing from their platforms, devices, vehicles, exhibitions, publications, and so on.

    Honestly, we are really curious if the SOPR survives this CES.

    10:36 and 22:34 UTC+1
    More evidence of Volkswagen and Siemens mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The companies Volkswagen and Siemens collaborated once again in October 2018 on a system belonging to the field of the Internet of Things of the second generation (IoT 2.0), which comprises traffic lights at junctions and Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications respectively Vehicle-to-everything (V2X) communications technology, also called Car-to-Infrastructure (C2I) and Car-to-Car (C2C) respectively Car-to-X by Volkswagen as part of its Connected World concept (see also the case of the company Volkswagen in the Investigations::Car #380 of the 2nd of April 2013 and the Clarification of the 11th of January 2018). From a related report we got the following information: "Traffic lights, which inform about the ideal speed for green phases, and sensors, which protect cyclists and pedestrians. VW and Siemens test [...] how crossings could become safer. [...] Ten traffic lights [...] will be refitted for online operation by Siemens and VW in the first step. The data transmission is done via [the Wireless Local Area Network (WLAN) standard] WLANp. [...] It is sufficient if the traffic light communicates with one car, which passes on the information to a following vehicle, and from there it is passed on again. [...] Moreover, additional sensors should be installed at two junctions, which warn car drivers of pedestrians and cyclists in a better way. Moreover, the sensors are able to gather data on site before the systems in the car recognize and process them."
    Also note that the manufacturer Honda is working on the WLANp standard respectively the IEEE 802.11p standard as well, and we are sure that it uses it for its variant of our Swarm Intelligence System (SIS) instead of a Near-Field Communication (NFC) standard, because NFC does not work at the distances required for such V2X systems.
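    The relaying scheme in the quoted report, i.e. a traffic light reaching distant vehicles through car-to-car forwarding, can be sketched as a hop-limited flood; this generic illustration is not the WLANp respectively IEEE 802.11p protocol itself, and all names are assumptions.

```python
# Sketch of the car-to-car relaying idea: a traffic light delivers its
# green-phase information to one nearby car, and each car forwards it to
# the cars it can reach, up to a hop limit. Purely illustrative.

def relay(message: dict, cars_in_range: dict, max_hops: int = 3) -> list:
    """Flood `message` through a chain of cars, up to max_hops hops.

    `cars_in_range` maps each car id to the list of car ids it can reach.
    Returns the ids of all cars that received the message, in order.
    """
    received = []
    frontier = [message["first_receiver"]]  # the one car the light reaches
    seen = set(frontier)
    for _ in range(max_hops):
        next_frontier = []
        for car in frontier:
            received.append(car)
            for neighbour in cars_in_range.get(car, []):
                if neighbour not in seen:   # deliver each message only once
                    seen.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return received
```

    With a chain of four cars and the default three hops, the message reaches the first three; a fourth hop would reach the last car as well.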

    Simply take a look at the

  • Feature-List AutoSemantic #1,
  • AutoSemantic extension package, and
  • Swarm Intelligence System (SIS),

    and also note that

  • WLANp is nothing else than a network standard used for our Dedicated (Short-Range) Communications (DSRC) system DediCom besides other wireless network standards coming from mobile devices based on our Ontologic System and Ontoscope, and
  • it does not matter in this context how our Dedicated Communications (DediCom) system is realized in detail.

    In the near future, we will see whether the companies will sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well, as indicated throughout the last year.

    23:11 UTC+1
    SOPR #160

    Have we already mentioned that fortunately the company LG Electronics has also been on board of our Society for Ontological Performance and Reproduction (SOPR) for some weeks? Yes indeed, CLOi's statement was: Computer says Yes. Maybe we will see it in a better mood at an exhibition as well.


    06.January.2019

    01:12, 11:22, and 15:53 UTC+1
    More evidence of Honda mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The company Honda thought that it had to present once again a solution based on our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. From a related report of a leading computer technology media company we got the following information: "Like last year, the robot itself is basically an [All-Terrain Vehicle (]ATV[)] skateboard, to which a variety of specialized components can be attached based on its use. It has four-wheel drive, and it's capable of navigating terrain on its own thanks to sensors and GPS. [...] The automaker will also debut its Honda RaaS (Robotics as a Service) Platform, which Honda promises will speed up development of future robots. It covers common functions like communication and data sharing so that future robots can work together without complex one-off solutions. Think of it as similar to a vehicle platform, but for robot development."

    The so-called skateboard taken alone does not constitute an issue. But we already have the case of the company Trexa, which has presented such a modular vehicle platform after us (see its case in the Investigations::Car #226 of the 11th of February 2010), and Honda merely added at least the as a Service (aaS) platform of our other solutions, specifically of our business units Roboticle and Style of Speed. But stealing from our corporation continuously is not a clever trick or even an infinite running gag, but still an unfair business practice.
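    The "common functions like communication and data sharing" of such a robotics platform layer can be sketched as a minimal publish/subscribe bus; the names are illustrative assumptions, not Honda's actual RaaS API.

```python
# Toy sketch of a shared communication layer that lets several robots
# cooperate without one-off integrations: a minimal publish/subscribe bus.
# All names are invented for this illustration.

from collections import defaultdict

class RobotBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback) -> None:
        """A robot registers interest in a topic, e.g. 'obstacle'."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload) -> int:
        """Deliver payload to every subscriber; return how many were notified."""
        for cb in self._subscribers[topic]:
            cb(payload)
        return len(self._subscribers[topic])
```

    Two robots subscribed to the same topic both receive each published detection, which is the interoperability idea the quoted report describes.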
    Simply take a look at the

  • Car-E and Craft-E platforms with their related autonomous wagons,
  • System Automobile technology with its
    • modular hardware and software platform called Integrated Wheeled Intelligence (IWI),
    • modular application and service platform with ecosystem, including a Mobility as a Service (MaaS) respectively Transport as a Service (TaaS) platform, and
    • overall ecosystem,
  • integration of the Car-E and Craft-E platforms and the System Automobile technology, and
  • other related original and unique concepts and solutions by us.

    Therefore, it has to be understood that Honda has infringed our copyright and other rights of C.S. and our corporation.
    Anyway, Honda has to sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well.

    01:36, 12:10, and 19:13 UTC+1
    Success story of our OntoBot continues

    The success story of our voice-based assistant continues and shows the supremacy of it and its basis, our OntoBot.
    We knew and said it all the time, but now it has been confirmed by a leading computer technology media company with the headline "Alexa's newest tech: the latest products that work with Amazon's AI".

    The same media company also made ambivalent statements in some reports like the following one: "Amazon and Google are expected to make big splashes designed to show the rest of the tech industry that they - not the other guy - have the best platform for operating our smart homes, connected cars and voice-powered offices."
    If C.S. is meant to be the other guy, then we can assure everybody, and not only the tech industry, that both (designated and already virtual) members of our Society for Ontological Performance and Reproduction (SOPR), together with the other major providers of voice-based systems and virtual assistants, like Microsoft (OntoBot→Cortana), Samsung (OntoBot→Bixby), and Apple (OntoBot→Siri), and the rest of the tech industry respectively the (designated and already virtual) members of our SOPR and other industries, do indeed have the best platform as the foundation for their voice-based systems, virtual assistants, and many other technologies, products, and services, which is our iconic masterpiece titled Ontologic System and created by C.S..

    And because some unteachable journalists of CNET have a serious problem with the reality and a similar issue in relation to another voice-based system and virtual assistant, we recall once again that the following statement is characteristic for fake news: "Apple's Siri pioneered voice computing on phones [...]."
    The fact is that the company Apple did not pioneer voice computing on (mobile) phones, but merely copied this part of our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. For getting the facts, read the case of Apple in the Investigations::Multimedia of the 5th of October 2011 and the 12th of June 2012, and the Investigations::Multimedia, AI and KM of the 30th of April 2013, and also the Clarification of the 28th of December 2011, and the point 7. of the Clarification of the 4th of May 2013.
    For sure, we will get a compensation of damages even though we noted and alleged the infringements more than 3 years ago, because it was only possible to show Apple's various serious breaches of multiple laws some years later due to the company's unsuccessful strategy of camouflaging its infringements as ordinary technological progress or an act of fair use.
    Eventually, we own at least the moral rights for our significant pioneering works and the copyright for the designs respectively the Ontologic System Architecture and the Ontoscope Architecture, and hence the copyright, and we do not think that a measure, as described in the Comment of the Day of the 18th of November 2018, will truly make the design of a product or a service of a company more desirable and in demand.


    More evidence of Qualcomm and Ford mimicking C.S. and C.S. GmbH

    07.January.2019

    15:16 UTC+2
    Our OS vision, revolution, and domination are heating up

    In fact, it is nearly incredible to see that our iconic masterpiece titled Ontologic System and created by C.S. is creating our New Reality (NR) through the very busy activities of the (designated and already virtual) members of our Society for Ontological Performance and Reproduction (SOPR), as can be seen with the

  • company Amazon that reported the sale of 100 million devices equipped with its voice-based virtual assistant OntoBot→Alexa,
  • company Google that reported the sale of 1 billion devices equipped with its voice-based virtual assistant OntoBot→Assistant, and
  • companies of the fields of white goods as well as smart home and Internet of Things (IoT) devices, which put our OS into their smart devices of all kinds with their OntoBot and Ontoscope variants comprising voice-based virtual assistants, SoftBionics (SB) (AI, ML, CV, CAS, etc.), Augmented Reality (AR), cloud computing platforms, and much more.

    One prominent example is the company Whirlpool, which, around 13 years after us, finally found out as well that a modern oven has a window in the door, which can be equipped with a transparent display and used with AR systems, applications, and services. ;D

    We will see much more of our solutions integrated in these and other devices and areas.
    And 2019 is only 1 week old and we have not yet published the updated Innovation-Pipeline of Ontonics.

    We recommend that Amazon and Google as well as the other (designated and already virtual) SOPR members increase their accrued liabilities for royalties. :D

    16:22 UTC+1
    Clarification

    *** Proof-reading mode ***
    Our original and unique, iconic work of art titled Ontoscope and created by C.S. features for example

  • a camera,
  • a robot, specifically an immobile robot (immobot), and
  • the OntoBot.

    The OntoBot is a component of our Ontologic System that

  • is a Cognitive Agent System (CAS), specifically a
    • Cognitive Agent that Learns and Organizes (CALO), and
    • Personalized Assistant that Learns (PAL),

    and

  • has the basic properties of (mostly) being collaborative.

    This implies that the Ontologic System and the Ontoscope also comprise a

  • collaborative robot (cobot), specifically a collaborative immobot (coimmobot) or immobile collaborative robot (immocobot or imcobot), and
  • collaborative CAS, specifically a collaborative CALO and a collaborative PAL,

    which can be trained or taught for everything in general, specifically for executing arbitrary processes such as

  • monitoring arbitrary locations,
  • capturing arbitrary events,
  • recognizing arbitrary objects, and
  • remembering all of them.

    Related patents seem to be void.
    Furthermore, the membership in our Society for Ontological Performance and Reproduction (SOPR) is mandatory for the reproduction and performance of parts of our Ontologic System and our Ontoscope.
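    The trainable behavior described above (monitoring locations, capturing events, recognizing objects, remembering all of them) can be sketched, purely as an illustration with assumed names, like this:

```python
# Illustrative sketch only: a trainable monitoring agent that observes
# locations, recognizes objects via a pluggable recognizer, and remembers
# everything it saw. All names are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Event:
    location: str
    observation: str

@dataclass
class MonitoringAgent:
    # "training" here means supplying a better recognizer function or model
    recognizer: Callable[[str], str]
    memory: List[Tuple[str, str]] = field(default_factory=list)

    def observe(self, event: Event) -> str:
        label = self.recognizer(event.observation)   # recognize arbitrary objects
        self.memory.append((event.location, label))  # remember all of them
        return label

agent = MonitoringAgent(recognizer=lambda obs: "person" if "face" in obs else "unknown")
agent.observe(Event("kitchen", "face detected near oven"))
print(agent.memory)  # [('kitchen', 'person')]
```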

    14:59 and 19:03 UTC+1
    More evidences WayRay mimicking C.S. and C.S. GmbH

    *** Work in progress - opinion about technology better wording ***
    The company WayRay merely constructed an Augmented Reality (AR) Head-Up Display (HUD) and implemented an AR Software Development Kit (SDK) that allows third-party developers to add their own developments to windshields. From an online encyclopedia we got the following information: "The company plans to bring two devices and one development platform:

  • Element - a device that includes a gyroscope, an accelerometer and Bluetooth and GLONASS / GPS modules, it connects to the OBD-II port and collects information about the driver's behavior, speed, fuel consumption and vehicle condition. The collected information is available in digital form. The mobile application provides recommendations for improving driving skills based on this information, and the entertainment feature Autoyoga offers to pass quests.[...]
  • Navion - a navigation system for cars which uses holographic augmented reality technology. Navion consists of a projection system and a visor with an embedded transparent holographic optical element. The device is compact and mounts onto the car's dashboard. The device is gesture-controlled, includes 3G, Bluetooth and GLONASS / GPS modules, a native mobile app and uses its own navigation software.[...]
  • True AR SDK - an augmented reality development framework for third-party developers which allows building AR apps for cars. These are the apps that run on holographic AR displays and complement the native AR interface. AR app content consists of virtual objects seamlessly integrated into the world around the car. The company plans to distribute the developed applications through its own AR marketplace.[...]

    In November 2017 [...] the company was granted other perks from companies represented among panel judges. Those perks included access to Microsoft Azure cloud infrastructure, a new Nvidia Drive PX 2 AI computer, an access to Elektrobit's software network for automated driving and consulting services from Porsche Consulting.
    [...] 2018 [...] WayRay announced its joint pilot project with the German automobile manufacturer Porsche and took home the People's Choice Award and the prize in the AR/VR category.[...] The company also cooperates with a number of automakers on several future vehicles, which will be presented in 2019 and subsequent years."
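    As a technical aside to the quoted description of the Element device: reading driving data from the OBD-II port follows the standardized SAE J1979 encoding. A minimal decoding sketch (the two formulas are the standard ones; the surrounding framing is ours):

```python
# Illustrative sketch of decoding two standardized OBD-II (SAE J1979)
# mode-01 parameters, the kind of data a dongle plugged into the OBD-II
# port reads. The formulas are the standard ones; the framing is ours.
def decode_pid(pid: int, data: bytes) -> float:
    if pid == 0x0C:  # engine RPM = ((A * 256) + B) / 4
        return (data[0] * 256 + data[1]) / 4
    if pid == 0x0D:  # vehicle speed = A, in km/h
        return float(data[0])
    raise ValueError(f"unsupported PID {pid:#04x}")

print(decode_pid(0x0D, bytes([90])))          # 90.0 (km/h)
print(decode_pid(0x0C, bytes([0x1A, 0xF8])))  # 1726.0 (rpm)
```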

    That is definitely not a wild windshield technology, as an incompetent journalist claimed.
    We would also like to give the explanation that

  • using a holographic film as projection screen is an older technology and most of the relevant patents have already expired,
  • every HUD system can be constructed to project on a wider (wind)screen, and
  • holographic displays of this kind were not convincing in shops, offices, and for other uses.
    We add another award: Most stupid investment of the last years.

    Instead of marketing, it would have been much better if the company's founder had focused on hardware that is not totally obsolete. We have

  • sensors,
  • connectivity,
  • gesture control, and
  • Augmented Reality (AR),

    even in advanced versions, all of which are already in the vehicles in accordance with our works of Style of Speed.
    From our point of view, the more interesting part is the list of investors, including

  • vehicle manufacturers, such as for example Porsche, Hyundai, Alibaba, SAIC Motor, and Rinspeed,
  • chipmakers, such as for example Intel and Nvidia, and
  • other companies, such as for example Samsung→Harman,

    which tells other entities and us a lot and gives us something to think about and rethink.

    At least all involved entities, specifically the carmakers, will not get around signing our Society for Ontological Performance and Reproduction (SOPR) agreement and paying royalties.

    As we said, we are really curious if the SOPR survives the CES.

    17:26 and 23:47 UTC+1
    More evidences Qualcomm and Ford mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The manufacturers Ford and Qualcomm present a Vehicle-to-everything (V2X) technology based on our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. From a report of a leading computer technology media company we got the following information: "One of the most important visions for an idyllic, utopian vision for a future for transportation has been vehicle-to-everything communications. This V2X technology would allow for cars to talk to each other and to everything around them, helping to avoid collisions and congestion automatically. The problem is, in order for this to work all cars really need to be speaking the same language, and the development of those industry-wide standards has been dragging on for decades. [...] Ford has announced it's moving its own way. After demonstrating with Qualcomm in early 2018, Ford will begin deploying V2X communications in all its new cars starting in 2022.
    Ford is calling this C-V2X, and that "C" is a very important differentiator. It stands for "cellular", pointing to this tech being built on the back of existing mobile networks that power our cellphones. This actually had been a huge sticking point for the industry-wide adoption of V2X, because much of the original work was developing a proprietary wireless standard called dedicated short-range communications, or DSRC.
    Under development for decades, DSRC creates a short-range, point-to-point network that enables cars to talk to nearby objects. However, now that 4G is widespread, and with 5G coming soon, many in the industry believed it was time to ditch DSRC and go with cellular. That's exactly what Ford is doing here.
    In announcing the service, Executive Director of Ford's Connected Vehicle Platform Don Butler said that this C-V2X system is meant to "complement" the onboard sensors that enable the company's autonomous cars to function. "While these vehicles will be fully capable of operating without C-V2X," Butler said, "the technology could help them create more comprehensive maps of the world that lies beyond the view of lidar, radar and cameras."
    The big question, though, is whether the rest will follow. The full potential of V2X will only be realized when the entire industry is onboard, but with Audi also indicating it will support the C-V2X implementation provided by Qualcomm, we might be seeing the beginning of some proper - and long-needed - momentum."
    In another report of the same media company published as a reaction to our earlier note we got the following additional information: "The chipmaker discussed its efforts to deploy C-V2X technology in cars and roadside infrastructure as a new way to allow drivers to - in effect - see around blind corners, stop ahead of emergencies a mile down the road and brake before hitting a distracted pedestrian. The technology uses existing cellular networks to allow cars to essentially communicate with each other and other objects along the road. That information is then fed as warnings to drivers to help them avoid collisions and other dangers. [...]
    However, making C-V2X a reality will likely require years of work to bring the system to cars and infrastructure. [...]
    Also, C-V2X offers a different way to create this new communication system that may move away from a similar technology. That system is called dedicated short-range communications, or DSRC, which has been under development for decades.
    As part of Qualcomm's presentation [...], Ford said it plans to deploy C-V2X communications in all its new cars starting in 2022, offering a big boost for C-V2X's future as a new industry standard ahead of DSRC."

    First of all, it is a proven fact that it is not Ford's own way at all, but merely the realization of our way or solution presented many years ago, which created the proper and long-needed momentum and is happening now.
    Simply read related notes about other companies such as the notes

  • More evidences Volkswagen and Siemens mimicking C.S. and C.S. GmbH, and
  • More evidences Nissan mimicking C.S. and C.S. GmbH

    of the 5th of January 2019 to find out that we did everything with our Ontologic System and its AutoSemantic extension package as well as the Hyper Connectivity suite of Style of Speed that Qualcomm and Ford present at a consumer electronics exhibition,

    and note the little detail that we wrote "Dedicated (Short-Range) Communications (DSRC) system DediCom" for emphasizing that our DediCom system

  • works with or without short-range communications or network standards, and
  • complements and improves the original DSRC with other wireless network standards, specifically the ones used for mobile devices such as for example mobile phones, smartphones, and handheld Ontoscopes, of course, including cellular network standards, also of course.

    In fact, we have very carefully authored our publications.
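    The described behavior of our DediCom system, working with or without short-range communications, can be sketched as a simple transport fallback; the class and function names are illustrative assumptions, not an actual DediCom API:

```python
# Illustrative sketch of a transport-agnostic V2X sender: prefer a
# short-range (DSRC-like) link when a peer is in range, otherwise fall
# back to a cellular link. Names are assumptions, not a real DediCom API.
class Link:
    def __init__(self, name: str, available: bool) -> None:
        self.name, self.available = name, available

    def send(self, msg: str) -> str:
        return f"{self.name}:{msg}"

def send_v2x(msg: str, short_range: Link, cellular: Link) -> str:
    # works with or without short-range communications or network standards
    link = short_range if short_range.available else cellular
    return link.send(msg)

# no DSRC peer in range, so the warning goes out over the cellular link
print(send_v2x("brake-warning", Link("dsrc", available=False), Link("lte", available=True)))
```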

    In the near future, we will see if the companies will sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well, as indicated throughout the last year.

    Yes, this is very true.

    Btw.: We somehow have the impression that the C stands for C.S., as we also had with the model designation C Max.

    18:05 and 24:51 UTC+1
    More evidences Flir mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The company Flir continued presenting solutions based on our original and unique, iconic work of art titled Ontoscope and created by C.S.. From a report of a leading computer technology media company we got the following information: "When people talk about sensors for self-driving cars, they usually focus on the ones most often talked about: lidar, ultrasonic (parking) sensors and good old-fashioned cameras. Flir came to CES 2019 to tell everyone that we're still missing something.
    Flir [...] provide a simple solution for adding thermal recognition to an existing autonomous vehicle platform.
    [...]
    According to Flir, it's all about adding an extra layer of mapping to pick up things that other parts of a sensor suite might miss. The company says its infrared camera can detect objects in conditions that would befuddle other types of sensors, like when there's fog or excessive glare from the sun. In theory, the system could detect a car solely by picking up on the heat signatures generated from the friction between its tires and the road."

    Simply take a look at the

  • Ontoscope, specifically the points
    • "applications of more sensors and methodes of the subjects Artificial Intelligence and Robotics, like object recognition and object tracing",
    • "applications in the fields of science, vehicles of all kinds as well as robotics",
    • "range of sensors of our cameras and Ontoscopes, which in general is much wider than with usual cameras and comprises all kinds of sensors", and
    • "foundation[al] concept to which belong[s] an OS, the fields of Artificial Intelligence (AI) and robotics",

    and

  • Active Cam of our business unit Style of Speed.
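    The quoted idea of an extra thermal layer on top of the other sensors can be sketched as a simple confidence fusion; thresholds and data shapes are illustrative assumptions:

```python
# Illustrative sketch of the "extra layer" idea: fuse per-object confidence
# scores from a visible-light camera and a thermal camera, keeping whatever
# either sensor sees well enough. Thresholds and data shapes are assumptions.
def fuse(camera: dict, thermal: dict, threshold: float = 0.5) -> dict:
    fused = {}
    for obj in sorted(set(camera) | set(thermal)):
        # in fog or glare the camera score drops, while the thermal
        # score of a warm object (tires, pedestrians) may stay high
        score = max(camera.get(obj, 0.0), thermal.get(obj, 0.0))
        if score >= threshold:
            fused[obj] = score
    return fused

result = fuse(camera={"car": 0.2, "sign": 0.9},
              thermal={"car": 0.8, "pedestrian": 0.7})
print(result)  # {'car': 0.8, 'pedestrian': 0.7, 'sign': 0.9}
```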

    Obviously, it is time to talk about royalties, because our works are not for free.

    18:08, 20:30, and 25:33 UTC+1
    More evidences Audi and Disney mimicking C.S. and C.S. GmbH

    *** Work in progress ***
    The manufacturer Audi and the ... Disney presented Holoride. From a report we got the following information: "It's like Ready Player One - but in the back of your Audi.
    [...] but this time I was fully immersed in a space battle alongside Rocket - you know, the raccoon from Guardians of The Galaxy - and Iron Man. As I sat in the back seat [...] wearing a[...] virtual reality headset, Rocket encouraged me to "shoot" asteroids and rival drones as I "flew" through an outer-space environment.
    [...]
    [...] Audi and Disney's promised new media format, which aims to bring virtual reality to passengers in cars. The VR experience is intended to match, visually, what the passengers feel as they ride: If the car turns, accelerates or brakes, the VR environment will do the same thing. And the "experience" - whether it's a game or a movie or something else - will be automatically tailored to the length and movements of your drive route.
    Specifically, Holoride will offer something called "elastic content," automatically generated to suit each journey. [...] every one of Holoride's experiences would automatically match up to the length of a route programmed in the car's navigation system.
    In addition, the experience would be tailored to the drive route: In one demo, passengers "see" a cartoon-like, brightly colored town with intersections that match up to the real-world intersections the car is driving past. In another mock-up, users "fly" through a prehistoric landscape and turn left or right, soaring over dinosaurs, as the real-world car steers along the road.
    In Audi's own words: "If the car turns a tight corner, the player curves around an opposing spaceship in virtual reality. If the [car] accelerates, the ship in the experience does the same."
    [...]
    The VR demo really did bring me out of the world of riding in a car: Sure, I could feel the [car] moving around and accelerating and so on, but I found myself so immersed in the game that there was no real sense of what was happening.
    [...]
    "Every street pattern turns into a canvas" for content creators, [...] while "every back seat turns into a thrilling ride."
    Because the twists and turns and elevation changes of a preplanned navigation route are known, thanks to the car's built-in map data [...] Holoride's software can help create a virtual world that matches the real one. The game engine might be told not to place a digital obstacle near a highway exit, for instance, or might be shown when to make the game whip around to the left to match up with a hairpin bend in the real world.
    [...]
    Because visual cues match up with the car's real-world movements, you're less likely to get that sinking feeling you might feel when, say, looking down to read text messages or Facebook updates in the car.
    [...]
    He also imagines experiences that let passengers time travel as they ride through a modern-day city while seeing how buildings looked in years past.
    "I can say perhaps let's go minus 2,000 years [in Rome], [... o]r you go to New York City in the 1920s."
    [...]
    Audi began working on Holoride technology about four years ago and brought Disney on board about 18 months ago. But the goal long-term is for it to be an open platform: anyone could produce content and it could work in any car. Though he's cagey with details, Wollny seems to hint at a type of app store-like model where a user might buy experiences - movies, games and so on - for their Holoride device. Then the car would connect to your virtual reality headset with a wireless connection to provide information about your route and the car's motion.
    Wollny says Holoride will launch an SDK (software development kit) for others to experiment with [...]."

    The essential point is this creation of a virtual world that matches the real one, and the connection of reality and virtuality through the systems of a vehicle. Also, the Ontologically Anthropocentric Sensory Immersive Simulation (OASIS) of Ready Player One is an essential part of our OS.
    Simply take a look at the

  • Caliber/Calibre,
  • Head-Mounted Display@Car (HMD@Car),
  • Investigations::Multimedia of the 14th of January 2018,
  • Clarification of the 20th of February 2018, and
  • note More evidences Hyundai mimicking C.S. and C.S. GmbH of the 5th of January 2019

    to find out that we have created and described everything years before as totally new, unforeseeable and unexpected, original and unique, iconic visions and works of art.
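    The mechanism described in the quote, matching the virtual environment to the car's real motion, can be sketched as follows; the simple 1:1 mapping and all names are illustrative assumptions:

```python
# Illustrative sketch of "elastic content": a virtual camera whose motion
# mirrors the real vehicle's telemetry, so a turn or acceleration of the
# car shows up in VR. The 1:1 mapping and all names are assumptions.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    heading_deg: float = 0.0
    speed: float = 0.0

    def apply_telemetry(self, yaw_rate_deg_s: float, accel_mps2: float, dt_s: float) -> None:
        # if the car turns a tight corner, the virtual ship curves the same
        # way; if the car accelerates, the ship in the experience speeds up
        self.heading_deg = (self.heading_deg + yaw_rate_deg_s * dt_s) % 360.0
        self.speed = max(0.0, self.speed + accel_mps2 * dt_s)

cam = VirtualCamera()
cam.apply_telemetry(yaw_rate_deg_s=30.0, accel_mps2=2.0, dt_s=0.5)  # one half-second tick
print(cam)  # VirtualCamera(heading_deg=15.0, speed=1.0)
```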

    In the near future, we will see if the companies will sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well, as indicated throughout the last year.

    Btw.: Who are the stakeholders of Holoride? Audi holds only a minority stake.
    Also keep in mind that

  • licensing our works of art must be accredited by our SOPR, which excludes Free and Open Source Hardware and Software (FOSHS) licenses, and
  • if Holoride provides a meta-technology including a meta-system or a meta-platform, or a meta-service, then a fee is due for the related reproduction or a share of 10% is due for the related performance, to avoid labelling in a way as suggested in the Comment of the Day of the 18th of November 2018.

    Furthermore, our other ways of protection have to be viewed as complements to the copyright protection, proving for example the dates of publication.


    08.January.2019

    00:18 and 01:45 UTC+1
    More evidences Huawei, Vodafone, Bosch, and Audi mimicking C.S. and C.S. GmbH

    The companies Huawei, Vodafone, and Bosch as members of a collaboration with the marque Volkswagen→Audi are also working on the Cellular Vehicle-to-everything (C-V2X) technology and making our vision a reality in this way. From a press release of the company Huawei we got the following information: "At the Mobile World Congress 2017 (MWC 2017) in Barcelona, Huawei and Vodafone, with the support of Audi, demonstrate for the first time in Europe the use of cellular technology to connect cars to each other, to people, and to roadside infrastructure enhancing safety and delivering a better driving experience. Using a new technology called Cellular V2X (C-V2X) the live demonstration takes place in front of invited guests [...].
    As part of the 4G evolution towards 5G, the new C-V2X technology enables rapid exchange of information between vehicles, other road users and infrastructure promising to bring about a transformational change to driving, vehicle safety, traffic management and road efficiency. This latest development follows the successful live trial by Huawei, Vodafone and Bosch of a 5.9 GHz C-V2X connection purely between vehicles on the A9 motorway in Germany in February 2017."

    Simply read the notes

  • More evidences Volkswagen and Siemens mimicking C.S. and C.S. GmbH of the 5th of January 2019 and
  • More evidences Qualcomm and Ford mimicking C.S. and C.S. GmbH of the 7th of January 2019.

    A journalist reacted on our explanations: "The cellular part of C-V2X refers to the fact that it primarily operates on cellular networks, and it'll eventually be compatible with 5G once that takes over everything we know and love. But in the event cellular networks are hard to come by [...], it can also function on the 5.9-gigahertz spectrum, aka [IEEE] 802.11p, [aka WLANp,] aka dedicated short-range communications (DSRC)."
    Suddenly, nothing is pie in the sky and decades away anymore, and DSRC is not ditched at all but even complemented and improved with our Dedicated Communications (DediCom) system, and adapted and realized by the industries and (designated and already virtual) members of our Society for Ontological Performance and Reproduction (SOPR).

    We also note that once again C-V2X is designated as a new technology, which we understand as further evidence that our Ontologic System with its AutoSemantic extension package is its true origin.
    In the near future, we will see if the companies will sign the agreement with our Society for Ontological Performance and Reproduction (SOPR), and pay royalties according to the License Model (LM) of our SOPR as well, as indicated throughout the last year.

    Yes, our New Reality (NR) is very true.

    03:41, 04:05, 21:09, and 21:52 UTC+1
    100% OS = 100% success

    "Only 4 percent of US adults accessing voice assistants on a smartphone use Bixby, according to a survey [...]. That compares to 44 percent for Siri, 30 percent for Google Assistant and 17 percent for Alexa."
    But 100 percent of users worldwide accessing voice assistants use OntoBot.

    In this relation, we found the following information:

  • "[OntoBot→]Bixby started as a smarter way to use your Galaxy phone. Today, it is evolving to become a scalable, open [Artificial Intelligence (]AI[)] platform that will support more and more devices."
  • "The new-for-2019 Bixby 2.0 promises improved responsiveness, smarter replies and a more conversational approach than the old version [...]." <.i>The company Google will also be joining the ecosystem of Bixby with products services like Gmail, Google Play, YouTube, and Google Maps becoming compatible with Bixby in the near future.

    We have also seen several so-called Artificial Intelligence (AI) platforms with various orientations and emphases based on our Ontologic System (OS) from the companies IBM, Microsoft, Google, Amazon, Samsung, and LG Electronics. We guess there will be some more in the world, specifically from companies based in the P.R.China.

    Next up, OntoScope.

    Eventually, these developments also made our OS the accepted industry standard in the Information and Communication Technology (ICT) industry sector as well, besides the engineering industry sector.

    21:38, 21:55, and 23:17 UTC+1
    More evidences Continental mimicking C.S. and C.S. GmbH

    *** Work in progress - better wording of section about V2X ***
    The company Continental continued presenting solutions based on our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. From a report we got the following information: "Its "trained parking" system is especially neat: The vehicle will record and store inputs for a parking maneuver, such as a tricky corner, and the car can then act out those maneuvers autonomously later on using an app to activate.
    [...]
    Sensors are a big part of next-gen vehicle tech, whether in doors or monitoring the road itself. Continental will showcase its whole portfolio of sensors, including cameras that are capable of reading splash patterns on the road and determining whether or not a vehicle is about to hydroplane."

    Obviously, we have here

  • an essential feature of our Ontologic System (OS) with the Cognitive Agent System (CAS), specifically a
    • Cognitive Agent that Learns and Organizes (CALO), and
    • Personalized Assistant that Learns (PAL),

    and

  • the Active Sensors of our business unit Style of Speed.
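    The quoted "trained parking" behavior, recording a maneuver once and replaying it later, can be sketched like this; the data layout and names are illustrative assumptions, not Continental's implementation:

```python
# Illustrative sketch of the quoted "trained parking": record the control
# inputs of a maneuver once, then replay them on demand. The data layout
# and names are assumptions, not Continental's implementation.
from typing import List, Tuple

Maneuver = List[Tuple[float, float]]  # (steering_angle_deg, speed_mps) per tick

class TrainedParking:
    def __init__(self) -> None:
        self.stored: Maneuver = []

    def record(self, maneuver: Maneuver) -> None:
        self.stored = list(maneuver)   # learn the tricky corner once

    def replay(self) -> Maneuver:
        return list(self.stored)       # act it out autonomously later

tp = TrainedParking()
tp.record([(0.0, 1.0), (-20.0, 0.8), (-35.0, 0.5), (0.0, 0.0)])
print(tp.replay()[1])  # (-20.0, 0.8)
```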

    In addition, we saw a graphic showing paths of communications labelled GSM / UMTS / LTE / 5G, C-V2X, V2I: DSRC / C-V2X, and V2V: DSRC / C-V2X, which is a basic part of our Dedicated Communications (DediCom) system, which includes V2X: DSRC / C-V2X.
    Once again, the

  • mobile phones or cell phones, which work in a cellular network, are clearly referenced on the website of our OS OntoLinux,
  • Ontoscope operated by Ontologic System Components (OSC) respectively in our OS is related to mobile phones or cell phones,
  • field of Internet of Things (IoT) is listed on the website of our OS OntoLinux, and
  • other network standards are listed
    • in the Feature-List #1 under "Wireless network" and "MultiWLAN and Full Wi-Fi", and
    • as part of the AutoSemantic extension package,

    and

  • Dedicated Communications (DediCom) system of ours
    • is compatible with Near-Field Communication (NFC) and Wireless Local Area Network (WLAN) systems, which implies that DediCom must also work on the basis of Wireless Wide Area Network (WWAN) systems, including systems based on cellular network technology, and
    • is now a standard of
      • the automotive industry as well as
      • the fields of Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES).

    As far as we do know and remember, our

  • utilization of cellular technology was not envisioned at all for technologies including systems, products including applications, and services like
    • Dedicated Short-Range Communications (DSRC) technology,
    • Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications respectively Vehicle-to-everything (V2X) communications technology,
    • Hyper Connectivity suite of Style of Speed

    at the time we presented our visions and

  • our architecture, comprising our DediCom system with medium-range and long-range communications and adding cellular technology in this way, was a very progressive and even totally new, unforeseeable and unexpected, as well as seemingly ridiculous act.
    This provides a causal link with our OS, which demands the membership in our Society for Ontological Performance and Reproduction (SOPR).
    In the near future, we will see if the company will sign the agreement with our SOPR and pay royalties according to the License Model (LM) of our SOPR as well, as indicated throughout the last year.

    By the way, AutoSemantic stands for Automatic Semantic and not Automobile Semantic, which is AutoSemantic::Car. Actually, we have forgotten the reason for choosing this specific designation and only remember that we did not want a designation starting with the term ontologic.


    09.January.2019
    Clarification

    Our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S. feature for example

  • Distributed Systems (DSs), specifically
    • cloud computing,
  • the integrating Ontologic System Architecture (OSA) with an Emotion Architecture (EA),
  • the OntoBot component,
  • the OntoScope component, and
  • a camera.

    The OntoBot has as foundations the fields of

  • SoftBionics (SB), specifically
    • Artificial Intelligence (AI),
    • Machine Learning (ML),
    • Computer Vision (CV),
    • Simultaneous Localization And Mapping (SLAM) system,
    • Cognitive Vision (CogV),
    • Cognitive Agent System (CAS),
    • Cognitive Computing (CogC),
    • Emotional Intelligence (EI),
    • Multi-Agent System (MAS), and
    • Swarm Intelligence (SI) or Swarm Computing (SC).

    The OntoScope has as foundations the fields of

  • 3D environments, specifically
    • 3D environments with 3D character animation such as for example
      • Roboverse,
  • Multimodal User Interface (MUI) technologies, specifically
    • kinetic user interface,

    and is utilized for our Sp@ce environments for example.

    This implies that the Ontologic System and the Ontoscope also comprise an

  • SB engine, specifically
    • CV engine and
    • CogV engine,

    which can be trained or taught for everything in general, specifically for executing arbitrary processes such as

  • monitoring arbitrary locations,
  • capturing arbitrary events,
  • recognizing arbitrary objects, specifically
    • skeletal information and
    • faces,
  • interpreting or understanding body languages, and
  • remembering all of them,

    which again can be provided as a Service (aaS) by a cloud computing platform.

    Related patents issued after the year 2006 seem to be void.
    Furthermore, the membership in our Society for Ontological Performance and Reproduction (SOPR) is mandatory for the reproduction and performance of parts of our Ontologic System and our Ontoscope.
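    The cloud delivery described above, a trained recognition engine provided as a Service (aaS), can be sketched as follows; the endpoint path, payload shape, and engine logic are illustrative assumptions:

```python
# Illustrative sketch of delivering a recognition engine "as a Service":
# a trained engine stand-in behind a minimal HTTP endpoint. The endpoint
# path, payload shape, and engine logic are assumptions for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def recognize(payload: dict) -> dict:
    # stand-in for a trained CV/CogV engine (faces, objects, skeletons, ...)
    label = "face" if payload.get("has_face") else "object"
    return {"label": label, "remembered": True}

class RecognitionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        data = json.dumps(recognize(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To actually expose the engine as a cloud endpoint, one would run:
# HTTPServer(("0.0.0.0", 8080), RecognitionHandler).serve_forever()
print(recognize({"has_face": True}))  # {'label': 'face', 'remembered': True}
```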

    Btw.: On the one hand, we have a clue why an implementation of such a vision engine is called futuristic technology. But on the other hand, we have no clue why such a vision engine was named Kepler and can only guess that it must be related to the Picture of the Day of the 7th of March 2009, Pictures of the Day of the 5th of March 2012, and our Ontoscope, and also add that it must be called for example C.S. Engine or S********* Engine for avoiding infringements of various rights of C.S. and our corporation.

    02:43, 12:18, and 20: UTC+1
    More evidences IBM mimicking C.S. and C.S. GmbH

    *** Work in progress - better wording, some few missing informations ***
    The company IBM presented some of its works that are related to or even based on our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S.. From a first report we got the following information: "IBM [...] unveiled on Tuesday a new global weather forecasting system designed to provide more accurate and timely forecasts around the world. The IBM Global High-Resolution Atmospheric Forecasting System (GRAF), created with The Weather Company, a Big Blue subsidiary, updates every hour and provides forecasts for smaller, more specific areas than are currently covered. GRAF uses IBM supercomputers to analyze crowdsourced and in-flight data from millions of sensors around the world.
    [...]
    In many parts of the world, weather forecasts cover 12- to 15-kilometer expanses of land, meaning some weather phenomena might be missed. Traditional weather models also update every 6 to 12 hours. GRAF will address both issues, provide forecasts for smaller 3-square-kilometer areas and update hourly, IBM said.
    GRAF's will also tap into data from aircraft sensors, IBM said, giving the system access to wind speed and temperature data in parts of the world that lack specialized weather equipment. People can also opt to share pressure readings from barometers in their phones, which will help improve the forecasts."
    "IBM also used the expo to unveil its Q System One, which it said is the world's first integrated quantum computing system for both scientific and commercial use.
    [...]
    In addition to unveiling the IBM Q System One, the company said it plans to open its first IBM Q Quantum Computation Center for commercial clients in Poughkeepsie, New York, this year. The center will have advanced cloud-based quantum computing systems [...].
    [...] research labs [...] will work with IBM scientists, engineers and consultants to explore how quantum computing can be used for specific industries. They'll also look for ways quantum computers can more efficiently solve real-world problems, such as optimizing a country's power grid or advancing scientific understanding of the universe."
    "IBM received a record 9,100 patents in 2018, the company said. It had the most artificial intelligence, cloud computing, security and quantum computing-related patent grants in the industry, it noted. Last year, IBM inventors were granted 1,600 AI patents."
    ""

    From a second report we got the following information: "[...] IBM's hourly weather reports will cover entire Earth
    A weather forecasting system that can provide hourly updates for any location on the planet has been announced by technology giant IBM. [...] But IBM's new tool provides reports down to more specific, 3km-wide areas. The company says it can even predict individual thunderstorms.
    [...]
    [...] the Global High-Resolution Atmospheric Forecasting System (Graf) had been designed to gather data from a wide variety of sensors - including millions of smartphones equipped with atmospheric pressure sensors. Tracking changes in pressure is crucial in meteorology, the study of weather processes and forecasting. But besides this crowdsourced data from members of the public, Graf will also analyse information from thousands of commercial flights. Instruments on planes measure weather conditions and phenomena such as turbulence."

    Somehow, the global weather forecasting system GRAF is related to our Ontologic System and our Superstructure and Weather Control System (WCS) projects of our business unit Ontonics. See for example the

  • issue Superstructure #1 of the 29th of October 2016,
  • issue Superstructure #13 of the 5th of August 2017,
  • issue Superstructure #15 of the 8th of August 2017 {correct issue?},
  • OntoLix and OntoLinux Website update of the 21st of August 2017,
  • OntoLix and OntoLinux Further steps of the 23rd of September 2017, and
  • section Earth Simulation/Virtual Globe of the webpage Links to Software of the website of OntoLinux, specifically the links to

    and also

  • Sensor Swarm Robot project and
  • section Exotic Operating System, specifically the link to
    • ANTS,

    and note that

  • IBM copied the 3D structure through crowdsourced in-flight data provided by aircraft sensors and
  • this kind of crowdsourcing of environmental data is also promoted by entities that are supporting the so-called Distributed Web or Decentralized Web (DWeb).

    Furthermore, the company suddenly has a center with cloud-based quantum computing systems respectively a cloud computing platform that provides Quantum Computing as a Service (QCaaS) as well, specifically as part of Problem Solving Environments (PSEs).
    Take a look at the

  • chapter 6 Ausblick==Outlook of The Proposal,
  • section Basic Properties of the webpage Overview of the website of OntoLinux, specifically the list point Problem Solving Environment (PSE),
  • section Network Technology of the webpage Links to Software of the website of OntoLinux, specifically the links to
    • Grid Computing Info Centre (GRID Infoware) and
    • OpenStack, and
  • section Quantum computation of the webpage Terms of the 21st Century of the website of OntoLinux.

  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs), including
    • blockchain platforms and
    • distributed ledgers,
  • secure and safe data stores, specifically
    • ledgers,
  • validated and verified Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES),
  • digital currencies like our Quantum Coin or simply Qoin, and
  • some projects like for example
    • teleportation and
    • Weather Control (WC),

    are parts of the infrastructure of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), as well as of our Society for Ontological Performance and Reproduction (SOPR), and hence are taboo respectively require commissioning and provisioning, as well as operation under the control of our SOPR. That said, IBM is in a very good position to collaborate with us on their realization, besides some other elements of the infrastructure and platforms of our ON, OW, and OV, as well as our SOPR.

    {Quantum Computing (QC) belongs to the 21st Century items that are kept exclusive.} If IBM provides its QCaaS as

  • a technology including a system or a platform, or a service, then a share of 5% and
  • a meta-technology including a meta-system or a meta-platform, or a meta-service, then a share of 10%

    of the overall revenue generated with the performance of Ontologic Applications and Ontologic Services (OAOS) is due. If IBM provides performances of our OAOS for free, then we will estimate the overall revenue.
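    The two-tier share rule above amounts to a simple calculation. The following sketch is purely illustrative; the function name, interface, and offering labels are assumptions for illustration and not part of any agreement:

```python
def sopr_share_due(oaos_revenue: float, offering: str) -> float:
    """Illustrative sketch of the two-tier share rule stated above.

    Assumed rule: 5% of the overall OAOS revenue for a technology
    (system, platform, or service), 10% for a meta-technology
    (meta-system, meta-platform, or meta-service).
    All names here are hypothetical.
    """
    rates_percent = {"technology": 5, "meta-technology": 10}
    if offering not in rates_percent:
        raise ValueError(f"unknown offering class: {offering!r}")
    return oaos_revenue * rates_percent[offering] / 100

# Example: 2,000,000 in overall OAOS revenue
print(sopr_share_due(2_000_000, "technology"))       # 100000.0
print(sopr_share_due(2_000_000, "meta-technology"))  # 200000.0
```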

    In relation to the patents in the fields of AI, cloud computing, and Quantum Computing (QC) we would like to mention that

  • 30% are void because they are based on prior art or on Teaching, Suggestion, and Motivation (TSM) in prior art,
  • 30% are nonsense or have no relevance in practice,
  • 30% are relatively special and have to prove their relevance in practice, and
  • 10% are really interesting.

    Furthermore, we already integrated Evoos with QC, OS with QC, AI with QC respectively SB with QC, AI with cloud computing respectively SB with cloud computing, and QC with cloud computing, as well as with Service-Oriented technologies (SOx), FTRTDSs, New Reality (NR), and much more.

    Furthermore, if a patent limits the personal rights of C.S., specifically in relation to the

  • freedom of artistic expression and its exclusive right to monetize works of art,
  • OS, and
  • other visionary projects,

    then there might be a legal issue.
    We have observed the patent strategies of companies of the ICT industry sector, and one measure is not to implement them as FOSHS in our ON, OW, and OV.
    We added

  • a rule in the issue SOPR #116 of the 7th of May 2018 for handling plagiarisms and
  • examples in the issue ... of the ... for a related case.

    We will add a rule to the Articles of Association (AoA) of our SOPR so that suing C.S. or our SOPR respectively our corporation on the basis of a patent results in the revocation of membership in the SOPR.

    By the way: Everybody is as clever as IBM, but what we have seen and documented in its case for around 18 years now remains unfair business practice, specifically abuse of market power, illegal agreements, orchestrations, and choreographies with other entities, as well as other infringements of the rights of C.S. and our corporation.

    19:43 and 21:59 UTC+1
    More evidences Intel and Warner Bros. mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    The companies Intel and Warner Bros. presented an In-Vehicle Infotainment and Communication (IVIC) system based on our Space@Car In-Vehicle Augmented Reality Environment (IVARE). From a report we got the following information: "[Image caption:] Entertainment might get a whole lot more immersive in future autonomous vehicles.
    [...]
    In a[n automobile] outfitted with a flotilla of displays, projectors and sensors, Intel is showing what another aspect of augmented reality might look like in an autonomous car, like those developed by Intel's Mobileye.
    In the demonstration, passengers take a virtual trip into - and through - Gotham City. Of course, in reality they'd be driving along any road anywhere, but thanks to all the tech applied to the car (and the Warner Bros. license, of course[, and the agreement with our Society for Ontological Performance and Reproduction (SOPR), SOPR license, and SOPR End-User License Agreement (EULA), also of course]), things look rather more fanciful.
    It's a compelling idea, that of replacing reality with something more interesting [...].
    But this isn't just about entertainment, it's also another way for autonomous cars to communicate with passengers. In this case, it's Alfred who tells occupants about road work and detours, all spun within the Batman realm in this case, but still real-world information.
    Whether we'll ever see something like this in reality remains to be seen, but you can be sure that content companies will be clamoring to get their wares in front of bored commuters of the future. Partnerships like this, and indeed the one between Audi and Disney, are just the beginning."

    Also keep in mind that the New Mobility (NM) with autonomous cars and the technology of Intel's subsidiary Mobileye are based on our Ontologic System and our Ontoscope to a significant if not to say very large extent as well.

    See the

  • webpage Environment of the website of Style of Speed, which clearly says that "we have developed several visual and multimodal digital interior and exterior environments for vehicles, that are powered by OntoLix and OntoLinux" and lists our solutions,
  • note More evidences Hyundai mimicking C.S. and C.S. GmbH of the 5th of January 2019 about Kia with Emotional Intelligence (EI), and
  • note More evidences Audi and Disney mimicking C.S. and C.S. GmbH of the 7th of January 2019 about Holoride.

    Indeed, we have here one more nice example of our

  • New Reality (NR) {or Ontologic uniVerse (OV)?} in general, which connects, merges, unites, and unifies reality and virtuality, and
  • New Mobility (NM), which comprises our Mixed Reality Environments (MRE) based on our Sp@ce technology.

    We also have the

  • Distributed Systems (DSs) including
    • Peer-to-Peer (P2P) computing,
    • cloud computing, and
    • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),
  • AutoSemantic with Dedicated Communications (DediCom) system integrating Dedicated Short-Range Communications (DSRC) technology and Cellular Vehicle-to-everything (C-V2X) technology for Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications respectively Vehicle-to-everything (V2X) communications technology,
  • SoftBionics (SB) including
    • Artificial Intelligence (AI),
    • Machine Learning (ML),
    • Computer Vision (CV),
    • Cognitive Vision (CogV),
    • Cognitive Software Agent System (CSAS),
    • Cognitive Computing (CogC),
    • Emotional Intelligence (EI),
  • Service-Oriented technologies (SOx),
  • Hyper Connectivity,
  • and much more,

    as well.
    We are sure that both In-Vehicle Augmented Reality Environments (IVAREs) and In-Vehicle Virtual Reality Environments (IVVREs) respectively In-Vehicle Mixed Reality Environments (IVMREs) as well as other New Reality Environments (NREs) will be installed in future vehicles of all kinds.

    Unforeseeable and unexpected, original and unique, or simply iconic.


    10.January.2019
    Ontonics Further steps
    We developed a new feature of our media platform, which replaced the press in our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV). When you see it, you will not believe it on the one hand, but love it instantly on the other.
    In fact, it is just another masterpiece created by C.S..

    SOPR #161
    The topic of this issue is the certainty that a constructive interoperability and convergence benefitting everybody is only possible with our Society for Ontological Performance and Reproduction (SOPR).

    But more and more hardware and software manufacturers and some clever (not really) plagiarists are still trying to realize interoperability and convergence without our SOPR. That is comprehensible from the point of view of competition, but not acceptable from our point of view.
    For example,

  • in the past we discussed the illegal connection of Amazon's Alexa and Microsoft's Cortana as some kind of a Multi-Agent System (MAS), which was also integrated with the cloud computing platforms of both companies, and
  • in the note More evidences others mimicking C.S. and C.S. GmbH of the 5th of January 2019 we discussed an illegal meta virtual assistant.

    In this relation, we would like to clarify that we did not prohibit that

  • multiple virtual assistants can be installed on one device, and
  • a user can toggle on-the-fly between them by using their individual, specific features, skills, and procedures,

    and made no provisions about

  • how a device is working, and controlled and used by an individual virtual assistant, and
  • how two or more virtual assistants are used side by side and working concurrently in for example a whole-home system.

    But that is already the red line, and no integration is allowed, not even things like for example

  • "Alexa, ask Google Assistant to ask Bixby to ask Cortana ..." or "Hey, Google, ask Alexa to ask Siri to ask Watson ..." for simulating an integration with the goal to circumvent the provision for handling interoperability and convergence by our SOPR, or
  • similar tricks like for example that meta virtual assistant working without or with a cloud computing based system, which by the way would be a part of our SOPR infrastructure anyway.
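    Such a chained command can be pictured as a delegation chain. The following sketch merely illustrates the "X, ask Y to ask Z ..." phrasing quoted above; the parser, its assumed phrasing, and its output format are hypothetical, since no real voice assistant exposes such a chaining interface:

```python
import re

def delegation_chain(command: str) -> list[str]:
    """Split a chained voice command into the assistants it passes through.

    Hypothetical illustration of the "X, ask Y to ask Z ..." trick for
    simulating an integration; real assistants expose no such chaining API.
    """
    head, _, rest = command.partition(",")
    chain = [head.strip()]
    # Every "ask <Assistant> to" hop hands the request to the next assistant.
    chain += re.findall(r"ask ([A-Z][\w ]*?) to", rest)
    return chain

print(delegation_chain("Alexa, ask Google Assistant to ask Bixby to play music"))
# → ['Alexa', 'Google Assistant', 'Bixby']
```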

    But we would also like to recall that we will not make any further concessions. This includes the interoperability and the convergence of different

  • voice-based systems and
  • virtual assistants

    that will be managed by our SOPR and realized with its infrastructure, platforms, applications, and services.

    Once again, the reasons for this and other provisions are that we provide for example

  • openness,
  • fairness,
  • neutrality,
  • accountability,
  • transparency, and
  • interoperability, and also
  • our Ontologic Economic System (OES)

    for everybody participating.
    Therefore, another superordinate or higher alliance, ecosystem, or the like is not needed.

    Style of Speed Website update
    We noticed that the descriptions of some features of our multimedia environments are still published only on the webpage of our Head-Mounted Display@Car (HMD@Car), which we did to make copying them a little more difficult for the industries.
    But this could have been found out by carefully reading said webpage, which is connected with Space@Car, which works without or with special glasses or HMDs.
    In fact, the key statements

  • "The [In-Vehicle Infotainment and Communication (]IVIC[)] system HMD@Car consists of: [...] Space@Car [...]." and
  • "The system works inside and outside of a vehicle and makes possible all kinds of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) multimedia applications like they are shown on the webpage of our Space@Car technology for example."

    make crystal clear that the system also includes those environments that do not require special glasses or HMDs, because the webpage of Space@Car does not show head-worn devices at all. :D


    11.January.2019
    Ontonics Further steps
    We started a new project and created a related device, which will spark much discussion in the related fields, as usual with our works of art.

    Ontoscope Further steps
    We continued the work on one of our Head-Mounted Display (HMD) models, which has some very interesting features and allows incredibly fascinating experiences that will trigger much discussion, as usual with our creations in this special field of Mixed Reality (MR) in particular.
    We are really curious about the reaction of the public.

    Preliminary investigation of VR entertainment attractions started
    We have started our investigations to find out whether entertainment attractions such as

  • virtual reality roller coasters (since 2014),
  • other carnival attractions (e.g. rides),
  • location-based Virtual Reality (VR) experiences (e.g. Disney - The Void),
  • etc.

    are also based on our Caliber/Calibre and included in our Ontologic uniVerse. In fact, we have had CAVEs and Mixed Reality Environments (MREs) for many years, but our work of art titled Ontologic System and created by C.S., as well as related environments including our Sp@ce technology, have added a new quantity and quality, which could very well be unforeseeable and unexpected. Holoride and other In-Vehicle Mixed Reality Environments (IVMREs) are clear cases, but where does the white line have to be drawn?


    12.January.2019

    -01:50, 01:10, and 12:21 UTC+1
    More evidences LG Electronics mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    As we already mentioned in the past, the company LG Electronics has also presented an Artificial Intelligence (AI) platform. From a press release of LG Electronics we got the following information: "LG Electronics' Promise of AI for an Even Better Life Delivered at CES 2019 Keynote
    [...] LG's CTO outlined how the three key pillars of artificial intelligence - Evolve, Connect and Open - could deliver a robust AI ecosystem with diverse solutions for the real world.
    A leading proponent of consumer-focused AI technology, LG has continued to take a leadership role in advocating the beneficial role of AI in consumers' lives. The keynote featured LG's AI technologies implemented in daily-life scenarios to give the audience [...] a better understanding of how the company is working on transforming tomorrow. The LG CLOi GuideBot shared the spotlight with Dr. Park during the address [...].
    "Is technology making your life better?" Dr. Park began his keynote by asking the audience. "Over the past 100 years, household appliances such as refrigerators, washing machines and vacuum cleaners have reduced time spent on housework by around 75 per cent, but the amount of cognitive labor involved has significantly increased," Dr. Park explained, "The answer lies in AI - but only if we can achieve true intelligence."
    Since its launch in 2017, the company's AI brand LG ThinQ has seen its portfolio grow rapidly to include air conditioners, washing machines, TVs, smartphones and robot vacuum cleaners. Dr. Park presented LG's latest innovations in these appliances which leveraged the power of AI: the world's most advanced AI chip for home appliances, a washing machine with reinforced learning, and "self-healing" machines that can detect and fix malfunctions automatically without interrupting operation.
    "But I want to talk about more than just improvements," Dr. Park continued. "Our ambition is to go way beyond LG's current role as a leading manufacturer of consumer electronics and to become a lifestyle innovator that serves a truly intelligent way of living." Proclaiming LG's vision for the age of artificial intelligence, Dr. Park explained how the three key ideas - Evolve, Connect and Open - will empower AI technology to transform every aspect of daily life. LG ThinQ products will evolve over time by learning about the user, connect seamlessly with customers' lives and open up an ecosystem of innovation made stronger by partnerships and cooperation, ultimately offering "a new and daring definition of better life."
    Dr. Park highlighted the importance of evolving intelligence in consumer electronics. For AI devices to go beyond simple voice recognition and automated task execution, they must be able to understand the purpose and intention behind each command. Such contextual understanding requires AI to evolve through accumulating interaction with the user.
    LG is also extending its unparalleled consumer insight from home to the road with its innovative offerings for a new in-car experience. In the advent of an autonomous driving revolution, LG has set out to change and expand the very definition of vehicles from a means of transport to mobile space. LG's AI-enabled cabin solution will help users make the fullest use of their time saved from not driving, turning their vehicles into a conference room, movie theatre or even personal shopping boutique.
    "Building this new in-car experience requires a wide range of different solutions in both hardware and software ... which is why we need OPEN collaboration," said Dr. Park, illustrating LG's continued efforts to facilitate a culture of open innovation. He introduced LG's collaboration with the leading seat manufacturer Adient to develop smart seats for a more personalized in-car experience. Also announced was LG's new plan for its operating platform [system] webOS which has been open sourced since March 2018. "From this year, we'll be adding to webOS open access to LG's proprietary AI platform for developers all around the world," said Dr. Park.
    Dr. Park then presented LG's ambition to unlock the potential of AI technology a much larger scale by connecting hitherto individual units into intelligent systems. LG's Robot Service Delivery Platform (RSDP) will systematically coordinate what multiple robots see, hear and learn to transform how we manage our work and our environment. AI-based smart grid will allow us to radically improve the efficiency of our energy ecosystem, from production and storage to consumption. Intelligent signage will turn the physical elements of space such as walls, signboards and even floors, into an active, intelligent part of environment.
    [...]
    "LG is a global powerhouse at the forefront of the AI revolution, which will impact nearly every major industry from technology to healthcare, agriculture, transportation, engineering and beyond," said Gary Shapiro, president and CEO, CTA. "We were thrilled to have LG talk about 'AI for an Even Better Life' as its first keynote in CES history."
    Visitors to the LG ThinQ Zone in booth #11100 of the Las Vegas Convention Center this week will be able to experience LG's collaborative technology and evolving AI firsthand."

    What should we say?

    From a first report of a media company we got the following information: "At [a consumer electronics exhibition], the company pushed its ThinQ AI platform forward by allowing it to give you personalized recommendations based on your usage pattern. [...]
    Q: What the heck [is] LG ThinQ?
    [LG E]: LG ThinQ is our artificial intelligence platform that allows not just individual devices to react to commands, but also takes user data, lifestyle data, and pulls them together to make consumers' lives easier and more enjoyable.
    [Q:] So how does this work?
    [LG E:] LG's vision for AI takes what we call lifestyle data, and that's going to allow us to look at things like how our products are used in the home. For example, what television shows you watch, how often you do laundry, what type of clothes do you wash, how often do you clean your home. All those are combined with both internal and [external or] outside environmental factors to learn what users' specific circumstances really are, and from there, offer proactive [recommendations or] suggestions based on that user's lifestyle.
    [Q:] Does that mean you're collecting our data?
    [LG E:] With LG, data privacy is extremely important to us, so the way that we're protecting it is in a hybrid manner. So, their individual usage of that device is stored on the device only. But the aggregated data of how users use the device altogether is stored on a cloud. So always privacy is protected for the individual consumer.
    [Q:] How else is LG protecting our information?
    [Vanderwaal:] Consumer data and the privacy to it is extremely important to LG. Everything is opt in.
    [Q:] Why should consumers care about AI?
    [LG E:] The challenge with all of us who are trying to explain artificial intelligence to consumers is what are the benefits? We'll use a lot of use cases to actually show consumers what kind of lifestyle benefits they'll see when they use artificial intelligence.
    [Q:] Like what?
    [LG E:] A great example of [the] ThinQ platform is understanding that a consumer washes clothes every Saturday. By knowing every Saturday, the AI platform, called LG ThinQ, is gradually accumulating what the behavior is and offering [recommendations or] suggestions for how to get better washing.
    [Q:] Is this all just a gimmick?
    [LG E:] With LG, we're trying to make artificial intelligence relevant. Because right now, there's a lot of artificial intelligence that consumers are using and not even knowing it. So we're trying to make it simple to understand and show the benefits of why artificial intelligence will help products. And to communicate the idea of products communicating with one another."

    What should we say?

    From a second report we got the following additional information: "At CES 2019, LG envisions a future in which AI makes your life much easier
    Your car, washing machine and fridge will know all about you, and will adjust to your preferences, the company says.
    LG's president and CTO, I.P. Park, had a special guest appear on stage with him at the company's CES keynote Monday.
    As he began his talk about the future of artificial intelligence, the LG CLOi GuideBot, a white robot about four feet tall, rolled out to join him.
    [...]
    But Park mainly focused on LG's vision for the next generation of AI and how it'll change our lives. A three-part video played throughout the keynote demonstrated how AI will be integrated into various aspects of our lives, from our homes to our cars to our shopping and dining experiences.
    For example, your fridge will tell you when you're running low on milk and then order some on your behalf (taking into consideration whether you like low-fat milk, of course). You'll get a virtual fitting when you look in the mirror. Your washing machine will know how long to wash your clothes and what settings you like. Self-driving vehicles will figure out the optimum route based on the traffic situation, meaning you can kick back and watch movies or shop using giant gesture-controlled screens all around the vehicle.
    "[AI] should go from simply recognizing your command to really understanding your needs and your purpose," Park said. "Not just executing your orders, but reading your intentions, and recommending the best way of achieving it."
    [...]
    LG has been pushing into AI lately as it works to make its products - ranging from smartphones to washing machines to TVs - smarter and more helpful. The company held a press conference earlier Monday, during which it also touted its artificial intelligent system called LG ThinQ, which is designed to make proactive recommendations based on someone's personal preferences. [(See the first quoted report above.)]
    Park said LG is working with Adient, a company specializing in automotive seating, to develop AI-enabled smart seats that'll recognize you and adjust to your personal seating preferences. He also said LG will use its webOS interface, which is already used in its TVs, for future in-car entertainment experiences.
    The kind of smart living LG has envisioned for the future calls for the ability to transmit and process huge amounts of data, Park said. That's why 5G is a key enabler for "intelligent living on the go," he added.
    LG has teamed up with Qualcomm to one day enable functionalities such as allowing vehicles to communicate with one another and share perceptions of road conditions. [...]
    "AI is the future," Park said, "but only if we can achieve true intelligence."
    [Image caption:] Vanderaal talks about the network effect of several smart appliances like refrigerators and vacuum cleaners talking to each other and offering tips for your benefit.
    [Image caption:] Vanderwaal says that its new ThinQ AI devices will take your lifestyle data and broader usage patterns to offer you recommendations.
    [...]
    [Image caption:] Vanderwall holds up the LG V40 ThinQ, a flagship smartphone it launched last fall."

    What should we say?

    Simply read the notes

  • Incredible success continuing of the 4th of January 2019,
  • 100% OS = 100% success of the 8th of January 2019,
  • More evidences Audi and Disney mimicking C.S. and C.S. GmbH of the 7th of January 2019,
  • More evidences Continental mimicking C.S. and C.S. GmbH of the 8th of January 2019, and
  • More evidences Intel and Warner Bros. mimicking C.S. and C.S. GmbH of the 9th of January 2019.

    Also very nice is that an open source operating system is utilized as an interface for open access to a proprietary AI platform, though we would like to give the hint that said interface might not be the best option (anymore), because we have an overall AI platform that includes all the features and functionalities of said operating system as well.

    Please keep in mind that in accordance with the Articles of Association (AoA) and the Terms of Service (ToS), as well as the overall goals and core principles of our Society for Ontological Performance and Reproduction (SOPR)

  • one or more applications or services of a first entity integrated with one or more applications or services of a second entity has or have to be registered with interfaces at our SOPR, so that
    • in particular interoperability with other voice-based systems, virtual assistants, and SoftBionic (SB) platforms is possible and
    • in general openness, fairness, and so on are guaranteed,

    and

  • not all end entity data are opt-in. For example, if we find one or more convincing arguments that there is a significant social benefit in gathering data about the temperature, barometric pressure, humidity, and so on for weather forecasting, then this data is
    • collected from all its members and
    • made accessible for all its members

    by the SOPR.

    Btw.: These notes are merely a legal formalism, published until we all have a signed agreement in our hands.

    -01:50, 01:55, and 12:21 UTC+1
    More evidences Samsung mimicking C.S. and C.S. GmbH

    *** Proof-reading mode ***
    As we already mentioned in the past, the company Samsung has also presented an Artificial Intelligence (AI) platform as the successor of OntoBot→Bixby. From a first report of a media company we got the following information: "Hey Bixby, control my new Samsung TV, fridge - and robot from CES 2019
    Pretty soon, you won't be able to avoid Samsung's digital assistant when you use one of its devices.
    [...]
    "We have a bold vision to take a half a billion devices we sell every year and make them connected and intelligent," Samsung co-CEO HS Kim said during Monday's press conference. "Bixby is a scalable, open platform, and it will continue to grow as more partners join the ecosystem."
    [...]
    [...] by 2020 - the same time frame it's given for making all of its products internet-connected and integrated with Bixby.
    [...]
    For Samsung and numerous others, artificial intelligence is the next big wave of computing, and digital assistants are a step in that direction. Every tech heavyweight is investing in these assistants because they're heralded as the future of how we'll interact with our gadgets. The ultimate promise for the smart technology is to predict what you want before you even ask - but in most cases, the digital assistants just aren't smart enough yet. [...]
    [...]
    "With AI and other emerging technologies, we are hard at work improving those devices, helping them to better meet consumer needs and improve their daily lives," Lee said.
    [...]
    This year, Samsung said, Bixby will be embedded in its 2019 QLED and premium TVs, in smart appliances like refrigerators and washers, and in air conditioners, mobile devices, AI speakers and more.
    New Family Hub refrigerator software will let people interact in natural language to get answers to complicated questions, preset the oven, search for recipes and even call a[ ride-hailing service]. Bixby also shows information on the screen for a richer experience and displays an array of visual information. The new features will be available via an automatic update for most earlier Family Hub models.
    Samsung's new front-load washer also integrates Bixby. The digital assistant lets users control its smart features like getting recommendations for the best wash cycle, scheduling a cycle to be completed at a users' preferred time, automatically connecting the dryer cycle when the washer is done, or monitoring usage to efficiently manage their laundry appliances.
    Bixby also will be part of Samsung's new "Digital Cockpit" and robotics platforms.
    When it comes to robots, Samsung hopes to use AI to "manage activities of daily living." [...] It brought one of them, the Samsung Bot Care, on stage to demonstrate its health-tracking capabilities. The bot talked to a Samsung executive, instructing him to place a finger on the robot's screen to take his blood pressure. The robot could help elderly users monitor their health - or let their family members keep tabs from afar.
    "It's a partner for everyday tasks to help keep you healthy," Gary Lee, Samsung senior vice president and head of the company's AI efforts, said during the press conference. "Family members ... can check on your well-being even from far away."
    [...]
    For the Digital Cockpit in cars, Bixby will let drivers remotely check how much gas they have before going on a long road trip or to set the car temperature before heading out for the day. Using onboard cameras, the new Digital Cockpit recognizes drivers and passengers and sets up the car's personal space accordingly - adjusting the display preferences, seat height, lighting and queuing up favorite playlists. Passengers can even enjoy personalized screens on the rear seats [...].
    Samsung on Monday also talked up partnerships with third-party companies for Bixby. Uber and Ticketmaster already use Bixby to make their services smarter. Now iHeartRadio has joined as a partner, and Samsung said it "will continue to grow as more partners, such as Google, join the ecosystem."
    Samsung is "working very closely with Google" to make Google Maps, Gmail, Google Play and YouTube work with Bixby, Samsung's Kim said Monday.
    Along with new areas for Bixby, Samsung unveiled its core priorities when it comes to AI: fairness, accountability and transparency.
    "As it works to advance AI technology, Samsung is committed to ensuring the algorithms it builds are inclusive, the protection of user information and privacy are top priorities, and it's easy for consumers to understand what the company does with their data and how it is handled," the company said in a press release."

    What should we say?

    From a second report we got the following information: "Samsung's Bixby-enabled smart washer does laundry in 30 minutes
    Use Bixby to schedule a wash cycle, get cleaning recommendations and more.
    The world of voice-enabled smart washing machines expanded today with Samsung's announcement of a new Bixby-enabled front-load washer [...].
    [...]
    Samsung says you'll be able to use Bixby commands to get suggestions on the right laundry cycle, check in on the washer's status and even scheduling a wash cycle in advance."

    What should we say?

    From a third report we got the following additional information: "The company's new robotic-based platform addresses health care, air quality, retail and fitness.
    Samsung kicked off its CES 2019 press conference with its Bixby virtual assistant and ended it with an introduction of four new robotics initiatives, including Bot Care, a personal health care assistant that can handle an array of health monitoring tasks, some of which were demonstrated live on stage.
    [...]
    The other programs unveiled included Samsung Bot Air, which uses sensors to monitor air quality and detect pollution sources, and Bot Retail, a platform that will let robotic assistants field customer questions and requests, and help out with ordering and payment chores.
    [...]
    At that point, a white robot - Bot Care - rolled out on stage and engaged in a short conversation."

    What should we say?

    Please keep in mind that in accordance with the Articles of Association (AoA) and the Terms of Service (ToS), as well as the overall goals and core principles of our Society for Ontological Performance and Reproduction (SOPR)

  • one or more applications or services of a first entity integrated with one or more applications or services of a second entity has or have to be registered with interfaces at our SOPR, so that
    • in particular interoperability with other voice-based systems, virtual assistants, and SoftBionic (SB) platforms and
    • in general openness, fairness, and other core principles,

    are guaranteed and

  • not all end entity data are opt in. For example, if we find one or more convincing arguments that there is a significant social benefit by gathering data about the locations and movements of pedestrians, cyclists, drivers, or vehicles for traffic forecasting then this data is
    • collected from all its members and
    • made accessible for all its members

    by the SOPR.

    Btw.: These notes are merely a legal formalism published until we all have a signed agreement in our hands.

    12:22 and 23:04 UTC+1
    SOPR #162

    *** Work in progress - some better wording and some links to older issues missing ***
    This issue is about general thoughts about the following topics:

  • fairness,
  • neutrality,
  • transparency,
  • interoperability, and
  • convergence.

    Fairness
    The definition of our standpoint in relation to the core priority of fairness is as follows:

  • We do not discriminate but interact and deal with all entities in the same fair way.
    This has already been proven with our License Model (LM) that was created with the Reasonable And Non-Discriminatory (RAND) terms, also known as Fair, Reasonable, And Non-Discriminatory (FRAND) terms, in mind.

    Neutrality
    For many if not all members of our SOPR neutrality is one of the many important core principles. Accordingly, we would like to define our standpoint in this respect as well:

  • We do not intervene in the competition, because this is the duty of the market regulators, antitrust watchdogs, or competition authorities.
  • We do not exploit the functions, capabilities, and facilities of our SOPR for
    • political and
    • economic

    interests and activities of corporations and their subsidiaries and affiliates owned by

    • C.S.,
    • relatives of C.S., or
    • non-family entities,

    but only use the functions, capabilities, and facilities of our SOPR in very reasonable, clearly communicated, and totally transparent ways.
    In virtually all cases when we said that an Ontologic Net (ON), Ontologic Web (OW), or Ontologic uniVerse (OV) platform also provides an application or a service, then this was only meant to

    • tell one or more members of our SOPR to
      • rethink an action,
      • avoid exploiting the membership in our SOPR,
      • understand that there is a significant social benefit for
        • stopping an activity and
        • supporting the overall goals of our SOPR,

      and

    • guarantee interoperability (see also the issue .. of the ... {online maps}).

    Indeed, showing the consequences of an action is not the most diplomatic way, but this way works in practice to the advantage of all participating entities, specifically in a wild or unregulated, highly competitive, and very vigorous environment.

  • We do not collect data about an individual SOPR member but only so-called
    • lifestyle data, like some companies do with their SoftBionic (SB) platforms including Artificial Intelligence (AI) platforms, and
    • corporate style data (see also the issue #142 of the 1st of October 2018),

    which are

    • aggregated anonymous data of how entities use the OS and our SOPR altogether,
    • stored in the facilities of our SOPR, and
    • accessible by all members of our SOPR,

    so that

    • privacy,
    • confidentiality,
    • security, and
    • safety

    in all matters, specifically of data, properties, and actions, are protected and guaranteed for all individual entities.
    Depending on the

  • complexity and
  • needed computing power and bandwidth

    providing the data is free of charge or at cost price.

    Transparency
    Transparency is another core priority and our standpoint is defined as follows:

  • We do not make any business activities opaque if there is no very convincing and clearly communicated reason for this. SOPR members can get complete transparency about all activities of our SOPR anytime and anywhere.

    Interoperability
    As we discussed in several other issues {which ones?} and said in the issue #161 of the 10th of January 2019, "constructive interoperability and convergence benefitting everybody is only possible with our Society for Ontological Performance and Reproduction (SOPR)".

    But more and more hardware and software manufacturers and some clever (not really) plagiarists are still trying to realize both without us. That is comprehensible from the point of view of competition but not acceptable from our point of view.
    For example,

  • in the note More evidences LG Electronics mimicking C.S. and C.S. GmbH of today and
  • in the note More evidences Samsung mimicking C.S. and C.S. GmbH of today

    we discussed the

  • integration of virtual assistants and other applications and services, and
  • collection of end entity data

    once again.

    In this relation, we would like to clarify that we did not prohibit that

  • an application or a service of a first entity is integrated with another application or another service of a second entity,
  • a user can freely select which application or service is used,

    and made no provisions about

  • how an application or a service is working, and controlled and used by an individual user, and
  • how two or more applications or services are used side by side and working concurrently on for example a cloud computing platform or an AI platform.

    But that is already the red line, and beyond it no integration is allowed, not even workarounds like for example

  • simulating an integration with the goal to circumvent the provision for handling interoperability and convergence by our SOPR, or
  • similar tricks like for example a ... {what?}, which by the way would be a part of our SOPR infrastructure anyway.

    But we would also like to recall that we will not make any further concessions. This includes the interoperability and the convergence of different

  • SoftBionic (SB) platforms including Artificial Intelligence (AI) platforms,

    that will be managed by our SOPR and realized with its infrastructure, platforms, applications, and services.

    Once again, the reasons for this and other provisions are that we provide our

  • open,
  • fair,
  • neutral,
  • accountable,
  • transparent, and
  • interoperable

    infrastructure, platforms, applications, and services of our SOPR and Ontologic Economic System (OES) for everybody participating.
    Therefore, another superordinate or higher alliance, ecosystem, or the like is not needed.

    Convergence
    In the case of our SOPR, convergence is merely the result of the advancing interoperability following the vision of our Ontologic System.
    In the issues SOPR #148 of the 7th of November 2018 and SOPR #154 of the 10th of December 2018 we presented a related schedule for restoring the correct situation, which was carefully set up in such a way that affected entities have more than sufficient time to handle the transition effects.


    13.January.2019
    Ontonics Further steps
    We worked on a machine and came to the points where at first a new material was required, then a new composite, and eventually a new solution, because the initial problem and its intermediate solutions became more and more complex and created new problems with each of these steps. Now, we have found a solution that should work very well, and we have already begun with its optimization and its utilization in, respectively integration into, designated applications.
    We are very sure that our solution will become the new standard, because it is just too good in every aspect, such as simplicity, safety, cost effectiveness, and so on.

    Ontoscope Further steps
    We improved the Head-Mounted Display (HMD) model mentioned in the Further steps of the 11th of January 2019.

    We also continued the work on one of our Head-Mounted Display (HMD) models, which has some very interesting features and allows incredibly fascinating experiences that will trigger much discussion, as usual with our creations in this special field of Mixed Reality (MR) in particular.

    In addition, we designed a new variant of one of our HMD types.

    As is the case with the first HMD model mentioned above, we are really curious about the reaction of the public.


    14.January.2019
    Style of Speed Further steps
    In the last weeks we worked on the designs of a

  • motor yacht for C.S. with the specification
    • length: 232 m / 761.15 ft,
    • beam: 40 m / 131.23 ft,
    • draft: 4.40 m / 14.44 ft, and
    • maximum speed: ≥ 50 kt,

    and

  • expedition yacht and support vessel for the crew and the toys with the specification
    • length: 150 m / 492.13 ft,
    • beam: 30 m / 98.43 ft, and
    • draft: 4 m / 13.12 ft, and
    • maximum speed: ≥ 40 kt,

    both with environmentally friendly respectively environmentally sustainable propulsion systems, for sure.
    Luckily, the New Panamax locks have been open since 2016, allowing the passage of vessels with a beam of up to 49 meters / 161 feet.

    We also used our gained knowledge for the designs of a

  • special motor yacht and
  • sail yacht

    for C.S., which are also used as technology carriers and club cruisers, and

  • some special vessels for various projects of our Hightech Office Ontonics.

    Just for bridging the time gap until more and more of our Vertical Take-Off and Landing (VTOL) aircraft are ready to fly, literally speaking, we designed a new type of business jet, which will be highly appealing and convincing for every entity interested in such aircraft.
    The foundational virtual analytics and also real model tests are already completed, so that the next steps are the finishing of the final design, which is standard work, and the construction of a prototype, which is also relatively easy due to our way of handling such endeavours.


    15.January.2019

    06:18, 15:28, and 17:55 UTC+1
    Clarification Announcement #2

    As it was with

  • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs) based on the techniques of the smart contract protocol and blockchain technique, and
  • Service-Oriented Computing (SOC),

    we are getting a more and more clear picture once again about the field of Vehicle-to-everything (V2X), also called Car-to-everything (Car-to-X or C2X), and also seeing why there is a little confusion and why many companies are presenting only very special solutions.
    It was around 14 to 18 years ago that we worked on and contributed to these fields.

    In this context, we found in our archive a document, which

  • was published in the year 2002,
  • was stored by us in the same year,
  • is about a research and development project for an in-vehicle middleware, which took place in the years 2000 to 2003, and
  • is related to our AutoSemantic extension package and our Dedicated Communications (DediCom) system included in it.

    The main point is that Cellular Vehicle-to-everything (C-V2X) has been declared and presented by the Information and Communication Technology (ICT) and automotive industry sectors as new for only a few (2 or 3) years, for example by

  • Qualcomm and Honda in 2014 with "Vehicle-to-pedestrian communications using DSRC" and
  • Qualcomm and Ford in 2016 with "C-V2X - complements other current vehicle sensor technologies by extending vehicle's ability to "see" further [...] Sensing the World [...] Conveying Intent [...] Situational Awareness",

    which suggests that it must be another variant related to

  • long-range communications based on for example
    • cellular networks,
  • medium-range communications, and
  • short-range communications based on for example
    • Wireless Local Area Network (WLAN),
    • Personal Area Network (PAN), and
    • Near-Field Communication (NFC) systems (e.g. Bluetooth),

    or, simply said, a convergent communication Cellular Local Area Network (CLAN), which works differently than, for example, a wireless mesh network, and is (also related to) our Dedicated (Short Range) Communications (DSRC) system DediCom, which utilizes a cellular network for swarming and a short-range network, which leads again to CLAN, and the compatibility of DediCom with WLAN and NFC, which suggests an integration of systems and leads again to CLAN and C-V2X.
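    The range classes listed above can be made concrete with a small illustrative sketch. Note that the distance thresholds below are assumptions chosen for illustration only and are not taken from any V2X or radio standard:

```python
# Illustrative selection of a communication link across the range
# classes named above (short-, medium-, and long-range).
# The distance thresholds are invented for this sketch only.
LINK_CLASSES = [
    (0.1, "NFC"),               # short-range: near-field, up to ~10 cm
    (10.0, "PAN"),              # short-range: e.g. Bluetooth, up to ~10 m
    (100.0, "WLAN"),            # short-range: e.g. IEEE 802.11, up to ~100 m
    (1000.0, "medium-range"),   # intermediate class
    (float("inf"), "cellular"), # long-range fallback
]

def select_link(distance_m: float) -> str:
    """Pick the first link class whose nominal range covers the distance,
    mirroring the idea of one convergent system unifying all classes."""
    for max_range, name in LINK_CLASSES:
        if distance_m <= max_range:
            return name
    return "cellular"

print(select_link(0.05))    # → NFC
print(select_link(50.0))    # → WLAN
print(select_link(5000.0))  # → cellular
```

    A real convergent system would of course select by signal quality, latency, and availability rather than by a single distance figure; the sketch only shows the layered classification.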
    We already discussed this subject matter in relation to confusing and contradicting reports of a media company, wondered about the whole reporting and presenting at a very well known consumer electronics exhibition held last week, and published the related Clarification Announcement #1 of the 10th of January 2019, which will include content of this message as well.
    See also the case of the company Volkswagen in the Investigations::Car #380 of the 2nd of April 2013 that is about Car-to-X (C2X), the Clarification of the 11th of January 2018, the Hyper Connectivity suite of Style of Speed, and so on.

    Accordingly, we have to take a closer look at what we could save in the era of the big spying and have officially added to and integrated into said in-vehicle middleware, such as for example

  • basic properties of OS,
  • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),
  • SoftBionics (SB),
  • Semantic (World Wide) Web (SWWW),
  • Service-Oriented Computing of the first generation (SOC 1.0) and Service-Oriented Computing of the second generation (SOC 2.0),
  • digital twins of environments (e.g. city and building) and mobile devices (e.g. vehicle, handheld Ontoscope and other devices, and wearable devices),
  • fields of Cyber-Physical Systems of the second generation (CPS 2.0), Internet of Things of the second generation (IoT 2.0), and Networked Embedded Systems of the second generation (NES 2.0) including Industrial Internet of Things (IIoT) and Industry 4.0,
  • grid computing and cloud computing,
  • cognitive grid computing and SB platforms (e.g. AI platforms),
  • swarming or Swarm Intelligence System (SIS),
  • prediction of the future or electronic horizon,
  • proactive-drive,
  • Multimodal User Interface (MUI),
  • New Reality (NR),
    • Mixed Reality (MR) including
      • Augmented Reality (AR) and
      • Virtual Reality (VR)
    ,
  • Autonomous Vehicles (AVs),
  • New Mobility (NM),
  • etc.,

    update our related notes, which we left in proof-reading mode for this and other reasons, and apologize for any confusion. But the final result can only be better.

    In fact, we were quite right and have sufficient evidence for showing a causal link with our OS.

    Ontonics Further steps
    We continued the work mentioned in the Further steps of the 13th of January 2019 and were able to

  • confirm the viability and practicability of our solutions and
  • optimize them depending on their applications.

    But it gets even better. In the course of this, we also found out that in comparison with the common solutions our new ones are

  • less expensive by far and
  • at least around 90% more efficient, which has further consequences, because a limit has been crossed in a way that allows the realization of even more fascinating stuff.

    For example, in subsequent steps we developed a new

  • all-inclusive, self-sustained overall system,
  • material of our Hoverinium class,
  • variant of a toy, which will make children smile, and
  • variant of a system, which will make big children, aka. adults, smile.

    But the best of all is that we have solved some much bigger problems, as the public will see in the near future.


    16.January.2019

    00:07 and 15:42 UTC+1
    Preliminary investigation of Linux Foundation continued

    *** Work in progress - some better order and wording, balancing with other publications ***
    As expected, the infringements of the copyright and other rights, as well as the provocation and the damage to the rights of C.S. and our corporation, were even increased deliberately with the liburing library, which includes separated and generalized functionality and its utilization that were initially introduced with the already disputed asynchronous code of the libaio library of the Linux kernel. From a first email we got the following information:
    "After some arm twisting from Christoph [Hellwig], I finally caved and divorced the aio-poll patches from aio/libaio itself. The io_uring interface itself is useful and efficient, and after rebasing all the new goodies on top of that, there was little reason to retain the aio connection. Hence io_uring was born. This is what I previously called scqring for aio, but now as a standalone entity.
    The SQ ring [data structure] is an array of indexes into an array of io_uring_iocbs, which describe the IO to be done. The CQ ring [data structure] is an array of io_uring_events, which describe a completion event. Both of these rings are mapped into the application through mmap(2)[, which is a method of memory-mapped file Input/Output (I/O or IO)], at special magic offsets. The application manipulates the ring directly, and then communicates with the kernel through these two system calls:
    [...]
    In terms of features, this has everything that the prior aio-poll postings did. Later patches add support for polled IO, fixed buffers, kernel side submission and polling, buffered aio, etc."
    From a second email we got the following additional information:
    "io_uring is a submission queue (SQ) and completion queue (CQ) pair that an application can use to communicate with the kernel for doing IO. This isn't aio/libaio, but it provides a similar set of features, as well as some new ones:
    - io_uring is a lot more efficient than aio. A lot, and in many ways.
    - io_uring supports buffered aio. Not just that, but efficiently as well. Cached data isn't punted to an async context.
    - io_uring supports polled IO, it takes advantage of the blk-mq polling work that went into 5.0-rc.
    - io_uring supports kernel side submissions for polled IO. This enables IO without ever having to do a system call [respectively to work exception-less].
    - io_uring supports fixed buffers for O_DIRECT. Buffers can be registered after an io_uring context has been setup, which eliminates the need to do get_user_pages() / put_pages() for each and every IO.
    [...]
    io_uring_setup(entries, params)
    Sets up a context for doing async IO. On success, returns a file descriptor that the application can mmap to gain access to the SQ ring, CQ ring, and io_uring_sqe's."
    But as far as we understand the working of this SQ and CQ pair, their ring data structures are used as shared memory, as it is also done with the illegal plagiarisms called VirtuOS (see the Investigations::Multimedia of the 15th of May 2018) and {FlexSC with shmem?} Flexible System Call Scheduling with Exception-Less System Calls (FlexSC) (see the Investigations::Multimedia of the 18th of May 2018).
    Eventually, that io_uring mechanism is even more general and kernel-less, but still asynchronous.
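    The submission/completion ring pattern described in the quoted emails can be sketched conceptually in a few lines of Python. This models only the single-producer/single-consumer head/tail index discipline of the SQ and CQ pair; the real interface lives in kernel memory mapped into the application via mmap(2) and is driven by the io_uring_setup(2) and io_uring_enter(2) system calls, and all names in the sketch are our own:

```python
from dataclasses import dataclass

RING_SIZE = 8  # power of two, so index wrapping is a cheap bit-mask

@dataclass
class Sqe:   # submission queue entry: describes one I/O request
    opcode: str
    data: int

@dataclass
class Cqe:   # completion queue entry: describes one finished request
    data: int
    result: int

class Ring:
    """A single-producer/single-consumer ring, standing in for the
    memory shared between application and kernel in the real interface."""
    def __init__(self):
        self.entries = [None] * RING_SIZE
        self.head = 0  # consumer index, only ever advanced by the consumer
        self.tail = 0  # producer index, only ever advanced by the producer

    def push(self, entry):
        if self.tail - self.head == RING_SIZE:
            raise BufferError("ring full")
        self.entries[self.tail & (RING_SIZE - 1)] = entry
        self.tail += 1  # publishing the entry to the consumer

    def pop(self):
        if self.head == self.tail:
            return None  # ring empty
        entry = self.entries[self.head & (RING_SIZE - 1)]
        self.head += 1
        return entry

def kernel_side(sq: Ring, cq: Ring):
    """Drain the SQ and post a completion for every request, standing in
    for the kernel's submission and completion work."""
    while (sqe := sq.pop()) is not None:
        cq.push(Cqe(data=sqe.data, result=len(sqe.opcode)))

sq, cq = Ring(), Ring()
for i in range(3):
    sq.push(Sqe(opcode="read", data=i))  # application fills the SQ
kernel_side(sq, cq)                      # one "enter" into the kernel
completions = []
while (cqe := cq.pop()) is not None:     # application reaps the CQ
    completions.append((cqe.data, cqe.result))
print(completions)  # → [(0, 4), (1, 4), (2, 4)]
```

    Because producer and consumer each own exactly one index, no locking is needed, which is one reason the shared-ring design is efficient; kernel-side polling then removes even the "enter" step, matching the exception-less operation discussed above.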

    Furthermore, the SQ and CQ pair is also used for what? Exactly, for the Remote Direct Memory Access (RDMA) mechanism or technology as well (see the Clarification of the 4th of June 2018).
    So Christoph Hellwig and Jens Axboe of the company Facebook, with the support of the subsidiary IBM→Red Hat, are continuing with their looting of our original and unique work of art titled Ontologic System and created by C.S. with its Ontologic System Architecture (OSA), with the goal to steal the whole Ontologic System (OS). That is not acceptable.

    Indeed, the integration of a Monolithic operating system (Mos) with the approaches of the

  • Kernel-Less Operating System (KLOS) or
  • Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE) or
  • both

    resulting in hybrid operating systems consisting of

  • a Mos and a KLOS,
  • a Mos and a Microkernel-Based operating system (MBos), or
  • a Mos, a KLOS, and an MBos

    is a part of the basic properties and the integrating Ontologic System Architecture (OSA) of our OS.

    Now, we have two pieces of evidence with the

  • disputed asynchronous code in the libaio library based on our exception-less system call mechanism and its kernel-less asynchronous variant, and
  • disputed exception-less and kernel-less code of the liburing library based on our hybrid operating system approach

    both based on our exception-less system call mechanism, like the illegal plagiarisms FlexSC and VirtuOS, which taken alone and together show causal links to our original and unique, iconic Ontologic System (OS).

    Due to their originality and uniqueness, as well as foundational significance in fields like for example

  • operating systems,
  • asynchronous networks (e.g. Internet), and
  • kernel-less systems (e.g. Internet and World Wide Web (WWW)), as well as
  • validated and verified, and validating and verifying systems (e.g. intercommunicating distributed ledgers), and
  • Resilient Distributed Systems (RDSs) respectively Challenge-Tolerant and Trustworthy Distributed Systems (CTTDSs) (e.g. blockchain platforms and distributed ledgers),

    that did not make much progress in the last 20 years, C.S. owns at least the moral rights for this

  • general architecture of operating systems,
  • particular architecture of
    • Unix-based operating systems and
    • Unix-like operating systems.

    But a moral right belongs to the copyright, as explained in the Clarification of the 19th of December 2018 and the note of the ...

    Moreover, with our OS we have also created and designed

  • a reflective, molecular or liquid, and modular system that defines the whole spectrum ranging from KLOS to Mos as another dimension of our New Reality (NR) (spectrum) and
  • the successors of the
    • Internet as an interconnected supercomputer and
    • WWW as a worldwide High Performance and High Productivity Computing System (HP²CS)

    comprising kernel-less asynchronous functionalities (see also the Clarification of the 4th of June 2018 and 27th of July 2018).

    Eventually, members of the Linux Foundation and similar foundations, associations, and groups want to "transform operating systems based on Linux [and Unix] into Ontologic Systems". But a quick look at the webpage Profile of the website of OntoLinux shows that this is exactly another aspect of our work of art respectively characteristic expression of ideas, which is protected by the copyright of C.S. and other rights of our corporation.
    It really seems to be too hard to understand that they are

  • making an illegal plagiarism with substantially similar expressions of ideas of the original and unique, iconic work of art of C.S. and
  • mimicking C.S. and our corporation, and
  • stealing
    • operating system features that are legally covered under the scope of the copyright protection of the Ontologic System and
    • a masterpiece of the 21st century in whole or in part and that even in front of the eyes of the worldwide public.

    In addition, the origin of a significant work of our corporation has deliberately not been named to the public, which is an act of unfair business practice.

    Btw.:

  • We are still working and reviewing older related publications. Specifically, we are not happy with the formulation "kernel-less asynchronous or exception-less system call mechanism", because we emphasized the aspect of asynchronicity to make the first issue more understandable but instead it might have become a source of confusion, and therefore we substituted it for the moment with formulations like for example "exception-less system call mechanism and its kernel-less asynchronous variant". Additionally, we are focusing more on the aspects of the integration of Mos, MBos, KLOS, and SPACE, the world wide computer, and the overall OS once again.
  • We already made clear in the recent past that we will fight for our rights, respectively at court, if the disputed implementations of Free and Open Source Hardware and Software (FOSHS) foundations are not removed.
  • We will never understand why our work was not referenced correctly in the years 2007 to 2011 in return for licensing our OS as FOSS and CC. Now, we are compensating for the stolen momentum by keeping it closed. As long as everybody is happy with this development, there is no problem. Isn't it?
  • We do not think that companies can go on like in the past and highly recommend that their managements change their strategies immediately, specifically their attempts to realize (better) alternatives or parallels to our OS that eventually do not exist.
  • We are not responsible for the legal issue that the leading instigator, agitator, and troublemaker, C. Hellwig, has with the subsidiary Dell Technologies→VMware since some years due to an alleged copyright infringement of code he and others wrote together and therefore we do not ... that he lets out his frustration on C.S. and our corporation. We are not sure if he and others are exactly covering themselves with glory and can really afford that mess.
  • We have noticed the cues on the share of 5%. But this share is the royalty for the performance of our Ontologic Applications and Ontologic Services (OAOS) in the Ontologic System, but not for the reproduction of our Ontologic System (Components) (OS(C)), for which we demand a fixed fee as royalty. Both, the low fixed fee and the low share, were chosen so that everybody makes a fair share.
    But many companies using FOSS of for example the fields of data center and cloud computing platform operation and management, operating system-level virtualization or containerization, and Big Data Processing (BDP) but not performing our OAOS would benefit without making their fair share.
    In addition, related FOSS foundations, associations, and similar groups would go on with copying more essential elements of our OS without showing their legal limitations.
    Furthermore, due to the

    a differentiation between system and platform, application, and service components is not possible, conceptually, which means that in the case we would give free the OS then we would give free the OAOS as well, eventually.


    17.January.2019
    Comment of the Day
    Intersup™

    In analogy to the designation Internet, which is the contraction for Interconnected network and designates the global network of interconnected computer networks, the designation Intersup is the contraction for Interconnected supercomputer and designates the global supercomputer of interconnected computers (see also the Clarification of the 4th of June 2018 and 21st of October 2018 for example).
    The Intersup is a foundation of the Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) of our Ontologic System (OS).


    19.January.2019
    Roboticle Further steps
    We have added a new model of a newer type of robot to our product range, which is used for logistics and can be utilized in many other areas as well.

    In addition, we have seen some images of the U.S.American Navy that suggest an interest in one of our new Unmanned Aerial Vehicles (UAVs), as usual. :)
    Accordingly, we have designed the UAV in different dimensions and for different applications, as wished and required.


    20.January.2019
    Style of Speed Further steps
    We worked on the motor yacht designated as CS-1 and mentioned in the Further steps of the 14th of January 2019, specifically on its propulsion system and exterior design.

    As a result, we added

  • 4 more engines resulting in an overall configuration with
    • 4 × engines each having 37.2 MW (49,900 hp), which is nearly as much as a gas turbine of the Boeing 777, and
    • 4 × engines each having 3.8 MW (5,100 hp),

    and a total output of around 164 MW (220,000 hp),

  • our variant of a system that improves the overall performance and efficiency even more, potentially resulting in a
    • speed increased to around 81 kt (150 km/h) and
    • fuel consumption decreased by at least 15%,
  • a slightly different alternative hull shape that fits better with the additional engines, and
  • a design of the exterior improved for the higher speed.
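    The quoted propulsion and speed figures can be cross-checked with a few lines of arithmetic, using the standard conversions 1 hp ≈ 745.7 W and 1 kt = 1.852 km/h:

```python
# Cross-check of the propulsion figures quoted above.
HP_PER_WATT = 1 / 745.7   # mechanical horsepower per watt
KMH_PER_KNOT = 1.852      # kilometers per hour per knot

total_mw = 4 * 37.2 + 4 * 3.8           # four large + four small engines
total_hp = total_mw * 1e6 * HP_PER_WATT

print(round(total_mw, 1))    # → 164.0 (MW, as quoted)
print(round(total_hp, -3))   # → 220000.0 (hp, rounded to the nearest thousand)
print(round(81 * KMH_PER_KNOT, 1))  # → 150.0 (km/h, matching the 81 kt figure)
```

    The individual engine ratings also line up: 37.2 MW is about 49,885 hp, consistent with the quoted "49,900 hp".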

    We also integrated these improvements in the designs of our other yachts and vessels where advantageous.
    If these improvements can be realized in practice then we have more options in general and correspondingly might rethink some general decisions.


    24.January.2019
    Style of Speed Further steps
    We would like to give some more information about our newest technologies developed and utilized for aerial vehicles with Vertical Take-Off and Landing (VTOL) capability, like our new conventional VTOL aircraft, which we mentioned in the Further steps of the 9th, 11th, 13th, and 31st of December 2018.

    The components of the propulsion system have been in operation on a large scale and for many years, and virtually all other components as well as their integration are also very mature.

    The new VTOL aircraft exceed the capabilities of helicopters and planes in many disciplines by far and even add further capabilities.
    For example, we got the confirmation from our OntoLab that the all-weather capability also listed with the many other outstanding features in the Further steps of 9th of December 2018 allows them to operate in

  • virtually all temperature ranges even in icy conditions,
  • heavy rainfall,
  • heavy snowfall, and
  • gale, though
    • this depends on the specific configurations in particular and
    • stormy weather would not be such an ideal flight condition in general. :D

    Unsurprisingly, the certification, production, and economical operation pose no problems at all with such excellent foundations.

    Finally, we would like to ask the national authorities to add all capabilities of our VTOL aircraft to their regulations, because

  • on the one hand, an aircraft with one or more rotors and a parachute is not enough, and
  • on the other hand, we have the machines, and it is the competitors' problem to comply with safer regulations.


    25.January.2019
    Comment of the Day
    Coco™
    Coco channel™
    Co²™

    Ontonics Web Further steps
    We added to our Innovation-Pipeline the new project:

  • Communication and Collaboration Channel (CoCo Channel)

    (see also the Ontologic Web Further steps of today).

    Ontologic Web Further steps
    We have begun to integrate the services of the communication platform and the collaboration platform, both integrated in our Ontologic System (OS), with cross-platform services. Correspondingly, the resulting platform is called the Communication and Collaboration (CoCo) platform. It provides the functionalities already listed in the Further steps of the 9th of November 2018, as well as their combination with other parts of our OS, such as Natural Language Processing (NLP) and the Intelligent Personal Assistant (IPA) (see also the Ontonics Further steps of today).

    At this point, we have not decided whether and to what extent the services will be provided free of

  • advertisement and
  • data collection and provision controlled by the users and other end entities,

    though these decisions also depend on external factors.


    27.January.2019
    Style of Speed Further steps
    For some years we have been looking at a basic working principle of aircraft, and for some months at two more special working principles, as well as the effects of changing the values of the relevant parameters.
    As a first result of this basic research, we developed a new variant of a system, which integrates the advantages of all principles within the limits of trade-offs that have to be made and in accordance with the requirement specification.
    In the next development step we have to find a convincing design for a first model.


    28.January.2019
    Style of Speed Further steps
    In relation to the variant of a system mentioned in the Further steps of the 27th of January 2019 (yesterday), we have created a basic design and, once we had it, developed several variants and improvements.

    In summary, we made the following steps so far:

  • In the first step, we were able to substitute an essential element of a first older system in various ways.
  • In the second step, we reduced the dimension of a conventional Vertical Take-Off and Landing (VTOL) aircraft based on the first older system by 50%.
  • In the third step, we improved the overall efficiency of our first new systems by around 90%.
  • In the fourth step, we integrated the first older system and our first new systems with a second older system to our second new systems.
  • In the fifth step, we reduced the dimension of our new conventional VTOL aircraft by at least 50% once again.

    Now, our new VTOL aircraft have the dimensions of common VTOL aircraft but also all the other advantages, specifically our extraordinary fall-proof, all-aloft, non-crash, whisper, and all-weather capabilities.
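    The compound effect of the steps summarized above can be sketched with a short calculation; it only illustrates how the stated percentages combine multiplicatively:

```python
# Sketch: compound effect of the successive steps summarized above.
# The percentages are taken from the text; the calculation illustrates
# how the two reduction steps combine multiplicatively.

dimension = 1.0        # normalized dimension of the baseline VTOL aircraft
dimension *= 1 - 0.50  # second step: dimension reduced by 50%
dimension *= 1 - 0.50  # fifth step: reduced by at least 50% once again

efficiency = 1.0
efficiency *= 1 + 0.90  # third step: overall efficiency improved by ~90%

print(f"remaining dimension: {dimension:.0%}")    # at most 25% of the original
print(f"relative efficiency: {efficiency:.2f}x")  # about 1.90x
```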

    Some experts will scratch their heads, specifically over how we utilized existing knowledge and items, while non-experts will wonder what is possible with contemporary knowledge and items, as well as ... creativity.
    And no, it is not a multicopter, tiltrotor, or tiltwing.


    29.January.2019

    08:50 UTC+1
    More evidence of others mimicking C.S. and C.S. GmbH

    We found several initiatives and companies in the so-called field of citizen data that have copied our projects, such as our project Castle in the Cloud, which

  • is listed in the Innovation-Pipeline of Ontonics and
  • "acts like a mini fortress for all your personal data",

    and with which we have

  • created our personal cloud computing and the Intra Cloud computing paradigms and
  • begun the social discussion about data privacy and data ownership in relation to the cloud computing paradigm, at a time when even a Person of Ordinary Skill In The Art (POSITA) did not know that their data was being collected by internet companies,

    and hence enjoy the copyright for our

  • description respectively expression of an idea in general and
  • combination with our original and unique, iconic works of art titled Ontologic System and Ontoscope, and created by C.S. in particular.

    All such projects are only legal if they comply with the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR), even if they are supported by national governments.
    We also would like to note that we already have a much better solution with the

  • activities and facilities of our SOPR, including the foundational
    • infrastructure,
    • systems, including the
      • Ontologic Economic System (OES) and
      • marketplace for everything system,
    • platforms, and
    • services

    and

  • other activities and facilities of our Hightech Office Ontonics, including other
    • infrastructures,
    • systems,
    • platforms, and
    • services

    of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV).
    Therefore, another superordinate or higher alliance, ecosystem, or whatsoever is not needed, as is definitely the case with the lying press, a fighting back, or whatsoever.

    01:17 and 22:29 UTC+1
    SOPR #163

    *** Work in progress - better wording ***
    The topics of this issue are:

  • external internet services,
  • Superstructure, and
  • legal matters.

    External internet services
    We have done considerable planning work in relation to the meshing, or combination and integration, of external internet services in our platforms, specifically of social networks or social media platforms.

    In this respect, we earnestly do not expect that the profit of a member of our Society for Ontological Performance and Reproduction (SOPR) will decline due to the activities of our SOPR. Quite the contrary: with the Ontologic System (OS) we have created, proposed, and begun to install in some parts a New Reality (NR), which is already becoming the new backbone of civilization and showing its potential for social, scientific, and technological, as well as economic developments and improvements. Eventually, the success of a SOPR member still depends on the quantity and quality of its own performance and the choice of the users.

    Superstructure
    Another point of thought is

  • the realization and the construction of our Superstructure and our Hovercity and Hoverland projects, including airspace management systems, which belong to the projects supported by our SOPR for the benefit of all members, and
  • their integration with related technologies, applications, and services, which we have to allow for licensing due to our general principles of
    • guaranteeing neutrality,
    • ensuring equal opportunity between Geographic Information Systems (GISs), online maps and virtual globes, and similar technologies, applications, and services, and
    • avoiding the abuse of market power

    (see the issue #162 of the 12th of January 2019), though we have superordinate infrastructures and technologies, applications, and services already (see also the issue #121 of the 29th of May 2018 and the issue Superstructure #8 of the 14th of May 2017).

    In this relation, we would like to see joint ventures of local and foreign companies, when they help us in

  • building up our
    • Highway in the Sky and
    • Silk Skyway with its hubs, superhubs, megahubs, and gigahubs

    and

  • providing services in fields like for example
    • logistics,
    • tourism, and
    • traffic management

    in America, Western Europe, the Middle East, the V.R.China, and South Africa, as well as other areas.

    Legal matters
    At this point, we would like to mention the following points (once again):

  • Companies that have signed our agreement, including the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our SOPR, instantly become eligible providers of our SOPR and will be contacted As Soon As Possible (ASAP), depending on related needs.
  • Realizations of parts of the infrastructure and management system of our SOPR are not allowed, or said in other words, technologies (e.g. systems and platforms), products (e.g. applications), and services that reproduce or perform elements of our overall OS are not allowed if they
    • only work with said realizations but not with our infrastructure and management system, or
    • are not integrated in our infrastructure and management system for guaranteeing interoperability and other constructive criteria.
  • We have been observing related cooperations and collaborations of companies for quite some time, and for some months we have been trying to answer the question whether they are
    • merely working together with customers of their services or
    • eventually
      • questioning the competency,
      • disturbing the goals, or
      • even threatening the integrity

      of our SOPR with such collaborations, which are meant to camouflage their activities.

  • We are not sure if customers of cloud computing platforms and so-called Artificial Intelligence (AI) platforms are aware of our SOPR and the AoA and the ToS with the LM of our SOPR.
  • On the 30th of November 2018 we made the following comment: "Because a separation of the single services is becoming more and more difficult or even impossible, we already suggested to take the whole revenue generated in the fields of web services, cloud computing, and so on as calculation base for our share", in issue #153 of the 5th of December 2018 we also added a calculation factor for fair accounting, and today we added that this also includes the service delivery network grid and the so-called multi-cloud.
  • On the 23rd of January 2019 we made the following comment: "We are also wondering why companies are still investing in copies of our infrastructure and management system of our SOPR, well knowing that they are mandatory.
    With cheap marketing tricks and unfair business practices no one is going to win anything anymore, not even a potted plant."
  • Companies that want to provide combined and integrated technologies, products, and services based on the fields of, for example,
    • Multimodal User Interface (MUI),
    • SoftBionics (SB), including
      • Cognitive Software Agent System (CSAS) or virtual assistant,
      • Multi-Agent System (MAS),
      • Swarm Intelligence (SI) or Swarm Computing (SC),
      • etc.,
    • Mixed Reality Environments (MREs), including
      • Augmented Reality Environments (AREs) and
      • Virtual Reality Environments (VREs),
    • Cyber-Physical Systems of the second generation (CPS 2.0), Internet of Things of the second generation (IoT 2.0), and Networked Embedded Systems of the second generation (NES 2.0), as well as Ubiquitous Computing of the second generation (UbiC 2.0), including the subfields of Industrial Internet of Things (IIoT) and Industry 4.0, and
    • all the other original and unique elements, and their combinations and integrations, of our Ontologic System we listed so many times in related explanations, clarifications, investigations, notes, comments, and other publications

    have to comply with the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our SOPR, which also address such a multi-cloud.

  • We already said in the past that
    • there is no gold rush,
    • issues with FOSHS are not our problem, and
    • entities have to wait for our SOPR.
  • If entities think they can outpace or ignore us, then we will demand either to
    • hand over all technologies (e.g. systems, infrastructures, and platforms), products (e.g. applications), and services to our SOPR at no cost, or
    • transition to the infrastructure of our SOPR at their own cost and pay compensation for damages, but then triple damages, and contribute other measures.
  • We will not renegotiate the consensus and make any further concessions.


    30.January.2019

    21:05, 23:39, and 26:14 UTC+1
    SOPR #164

    *** Work in progress - link to former issue missing ***
    In this issue we would like to share some thoughts about these topics:

  • case of no consensus and
  • legal matters.

    Case of no consensus

    At first, we would like to recall that the consensus, which was found and constitutes the basis for the corresponding agreement, still includes the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our SOPR with regulations concerning for example the

  • compensation of damages from the 1st of January 2010,
  • Interconnected supercomputer (Intersup),
  • (multi-)cloud computing platforms, specifically
    • designation or naming,
    • description or modelling,
    • registration or publication,
    • translation,
    • lookup (name, directory, and discovery),
    • discovery,
    • routing,
    • forwarding,
    • caching,
    • brokerage or brokering,
    • mediation,
    • moderation,
    • composition,
    • coordination,
    • orchestration, and
    • interoperability
      of
    • applications and
    • services (e.g. provision of signals and data, messaging, and notification of events),
  • Free and Open Source Hardware and Software (FOSHS), specifically
    • no performance and reproduction of Ontologic System Components (OSC) and Ontoscope Components (OsC) and
    • no performance and reproduction of Ontologic Applications and Ontologic Services (OAOS)
      under a license that is not accredited by our SOPR,
  • no explicit naming of C.S., our corporation, or a business unit of our corporation,
  • and so on.
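    Several of the platform functions listed above (registration or publication, name-based lookup, and discovery) can be illustrated with a minimal service-registry sketch. All class and method names here are illustrative assumptions, not part of any SOPR or Ontologic System specification:

```python
# Minimal sketch of a service registry covering a few of the functions
# listed above: registration/publication, name-based lookup, and
# discovery by capability. All names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ServiceRecord:
    name: str
    endpoint: str
    capabilities: set = field(default_factory=set)


class ServiceRegistry:
    def __init__(self):
        self._records = {}

    def register(self, record: ServiceRecord) -> None:
        """Registration / publication of a service under its name."""
        self._records[record.name] = record

    def lookup(self, name: str) -> ServiceRecord:
        """Name-based lookup of a single service."""
        return self._records[name]

    def discover(self, capability: str) -> list:
        """Discovery of all services offering a given capability."""
        return [r for r in self._records.values()
                if capability in r.capabilities]


registry = ServiceRegistry()
registry.register(ServiceRecord("storage-a", "https://a.example", {"storage"}))
registry.register(ServiceRecord("broker-b", "https://b.example", {"brokerage"}))

print(registry.lookup("storage-a").endpoint)             # https://a.example
print([r.name for r in registry.discover("brokerage")])  # ['broker-b']
```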

    In this connection, we would like to make it clear that we are not unhappy with the proposed consensus, quite the contrary, but we are also thinking about options for the case that said consensus cannot be implemented due to

  • the reason that too many or even all (designated and already virtual) members of the Society for Ontological Performance and Reproduction (SOPR) do not sign the agreement, or
  • several reasons that came forward in the last year and were not considered when drafting the consensus and the agreement, but led more and more to a new situation where it might be more advantageous for us to withdraw our consent and instead make a clear-cut delimitation, which would mean that for example
    • providers of platforms for (multi)cloud computing, FOSHS, AI, mobility, and other types of systems, applications, and services, and
    • manufacturers of vehicle platforms
    • do not get a license from us,
    • become merely suppliers of the business units of our corporation doing the big businesses alone on top of their platforms, and
    • still have to pay damages, which would be much higher than the suggested ridiculously low fixed fees.

    Legal matters
    In addition, we are looking for measures to handle the issue of huge patent portfolios filed by large companies of the Information and Communication Technology (ICT) and engineering industry sectors in cases where they

  • interfere with the freedom of expression of C.S. when working with and on the Ontologic System (OS) and the Ontoscope (Os),
  • concern every member of the SOPR,
  • disturb the goals or even threaten the integrity of our SOPR,
  • and so on.

    So far, we found the following options:

  • handing over all patents and other Intellectual Properties (IPs) that have a causal link with our OS and our Os, as already suggested in the recent past (see the issue ... of the ...),
  • increasing the fixed fees and share of revenue when using patented items,
  • prohibiting the use of patented items,
  • etc.

    when reproducing or performing, or both, our OS.
    This activity should not be misunderstood as a rejection of patents in general or an abuse of market power in particular but as a way to act against false cleverness.

    By the way: The OS is your friend and even much more.


    31.January.2019
    Comment of the Day
    Billboard drone™

    Roboticle Further steps
    Needless to say, we also developed several new safety drones based on the new Vertical Take-Off and Landing (VTOL) aircraft of our other business unit Style of Speed (see the Style of Speed Further steps of the 24th, 27th, and 28th of January 2019).

    Needless to say, some of these new safety drones are configured as

  • Delivery Drones™ and added to a hybrid transport system that is based on our hover technologies Flying Box, Flying Container, and Flying Pallet (see also the Roboticle Further steps of the 7th of February 2017, 13th of May 2017, and 15th of May 2017, and the Style of Speed Further steps of the 21st of April 2017 and 13th of May 2017),
  • rescue drones,
  • surveillance drones,
  • inspection drones,
  • billboard drones,
  • agricultural drones,
  • recreational model drones, and also
  • Flying Cams, as well as
  • High-Altitude Platform Stations (HAPSs) for
    • telecommunications,
    • environment monitoring, and
    • Intelligence, Surveillance and Reconnaissance (ISR)

    (see also the Further steps of the 19th of January 2019).

    Needless to say, customers who are happy members of the Society for Ontological Performance and Reproduction (SOPR) and exclusively use our airspace management system or traffic management system for the 3D Highway in the Sky of our Superstructure, both created by C.S. and managed by our other business unit and Hightech Office Ontonics, are eligible to use our safety drones as part of their platforms, applications, and services as well.

    © or ® or both
    Christian Stroetmann GmbH
    Disclaimer