News 2018 August

01.August.2018
Ontonics Further steps

We extended the range of application of one of our systems.

03:46, 12:22, and 24:26 UTC+2
SOPR #131

*** Proof-reading mode ***

We reviewed the past issues

  • #121 of the 29th of May 2018,
  • #124 of the 4th of July 2018,
  • #126 of the 10th of July 2018,
  • #129 of the 23rd of July 2018, and
  • #130 of the 29th of July 2018.

    On the basis of this review, the members of our Society for Ontological Performance and Reproduction (SOPR), actually only one with the supervisor, have decided unanimously with a sufficient majority of 100% of the eligible voters to update or extend the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM), or to do both, where required and reasonable, to address the following subject matters:

  • members' rights in the SOPR,
  • term and renewal of AoA and ToS,
  • labelling option,
  • regulation of FTRTDSs,
  • data center facilities, and
  • further legal steps.

    Members' rights in the SOPR
    We already discussed the transformation of the Steering Committees into Consulting Committees in the issue #124 of the 4th of July 2018. In this respect, we are not sure how wide-ranging the influence or even the control of the members of the SOPR could be and should be without blurring or destroying the characteristics and nature of the work of art titled Ontologic System and created by C.S. even more than the industries and the governments have already done.
    Sad to say, we came to the conclusion once again that specifically the intentions and goals of the entities on the highest and lowest levels of power do not allow the creation of a perfect world, or at least a better place, by design, and therefore we have only a relatively narrow margin for shaping the decision-making process inside the SOPR.

    Term and renewal of AoA and ToS
    We already discussed the regulation about the automatic renewal of the AoA and the ToS in the issue #130 of the 29th of July 2018.
    Accordingly, the members of our SOPR, actually only one with the supervisor, have decided unanimously with a sufficient majority of 100% of the eligible voters that the

  • automatic renewal of the AoA and ToS every five (5) years remains untouched and
  • announcement period is set to the term of agreement, which is five (5) years, but
  • all decisions related to the announcement period, the term of agreement, and the renewal can only be decided by the supervisor respectively C.S., Ontonics, or another right holder, but not any other member of the SOPR.

    Labelling option
    We are still weighing the very reasonable pro and contra arguments for introducing the option in the LM of not naming C.S., Ontonics, our SOPR, or two or more of them and of not labelling products, applications, and services, for an extra charge of 50% of a related royalty of our SOPR, as discussed in the issue #126 of the 10th of July 2018. The latest convincing contra argument has been worked out in-house and was emphasized by the Chief Executive Officer (CEO) of one of the leading Ontoscope manufacturers, the company Apple: "tariffs [...] show up as a tax on the consumer and wind up resulting in lower economic growth and sometimes can bring about significant risk of unintended consequences".

    Regulation of FTRTDSs

    The last weeks and especially the last developments in relation with High Performance and High Productivity Computing Systems (HP²CSs) and Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs) showed that some sections of the AoA and the ToS with the LM of our SOPR require an update or an extension as well.
    We also concluded that the fields of

  • HP²CSs and FTRTDSs based on for example the
    • cryptographically chained or interlinked records, including
      • blockchain,
      • cryptographically chained Directed Acyclic Graph (DAG), and
      • similar data structures,
    • Byzantine Fault Tolerance (BFT) protocols,
    • Byzantine-Resilient Replication (BRR) method,
    • smart contract protocol, and
    • blockchain technique,
    • secure logs on the basis of cryptography, specifically
      • cryptographically chained or interlinked records,
    • validated, verified, validating, verifying, cryptographically secured, and/or distributed transaction log systems and data stores,
  • grid computing and cloud computing systems,
  • Big Data Processing (BDP) systems, and
  • trusted, safe, and secure SoftBionics (SB)

    show a sufficient amount of originality, uniqueness, innovativeness, substance, and significance to handle them as fields of their own.
    Therefore, we recall our declaration that their combinations and integrations as elements of our Ontologic System (OS) with its Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), as well as Ontologic Applications and Ontologic Services (OAOS) were unforeseeable and unexpected with the sole exception of C.S. with the OS.
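
    As an illustration of what we mean by cryptographically chained or interlinked records, here is a minimal sketch in Python; the function and field names are ours, chosen purely for illustration:

```python
import hashlib
import json

def record_hash(prev_hash, payload):
    # The hash covers the predecessor's hash as well as the payload,
    # chaining the records: altering any earlier record invalidates
    # every later hash in the chain.
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload,
                  "hash": record_hash(prev, payload)})
    return chain

def verify(chain):
    # Walk the chain and recompute every hash link.
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(prev, rec["payload"]):
            return False
        prev = rec["hash"]
    return True

chain = []
append(chain, "genesis")
append(chain, "transfer: A -> B")
```

    The same chaining principle underlies blockchains, cryptographically chained DAGs, and the similar data structures listed above; they differ mainly in how records reference their predecessors.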

    Accordingly, the members of our SOPR, actually only one with the supervisor, have decided unanimously with a sufficient majority of 100% of the eligible voters to extend the regulation made in the issue #121 of the 29th of May 2018 in accordance with the discussions made in the issues #129 of the 23rd of July 2018 and #130 of the 29th of July 2018 that

  • our universal ledger or alpha ledger, which
    • is based on our network of telescopes respectively universal consensus or alpha consensus,
    • provides a real-time, auditable (transaction) log of ordered evidence of events, tokens, blocks, or records, which are cryptographically linked to arbitrary data, which again is replicated selectively among only those entities entitled to view or interact with it, and
    • combines an OS-wide and universe-wide, replicated Distributed Ledger Technology (DLT) log and partially replicated reference data, so that each participant can create their subsection of the universal ledger or alpha ledger with full confidence that it is consistent with that of other entities

    and

  • the separation of HP²CSs and FTRTDSs on the one side and (electronic) financial systems related to digital currencies on the other side, as discussed in the issue #130 of the 29th of July 2018

    are now mandatory.

    These steps

  • were unavoidable on the one hand but
  • do not constitute a problem for members of the SOPR on the other hand, because the SOPR is already regulating
    • FTRTDSs on the one side and
    • digital currencies on the other side.

    All SOPR members, such as

  • providers of cloud computing platforms respectively something as a Service (aaS), distributed ledgers, etc.,
  • organizations, foundations, or associations, such as the Mobility Open Blockchain Initiative (MOBI) for example, and
  • governments and their federal authorities,

    can proceed as usual. They merely have to combine, integrate, and unify their related systems, applications, and services with the facilities provided by the SOPR.

    Data center facilities
    The related data center facilities of the SOPR will also provide

  • every functionality of illegal free source and open source software as legal software licensed by the SOPR End-User License Agreement (SOPR EULA),
  • a foundation for an
    • Everything as a Service (EaaS) platform,
    • Electronic Commerce System (ECS), including Marketplace for Everything (MfE) platform,
    • IDentity and Access Management System (IDAMS),
    • Electronic Health Record (EHR) or Electronic Medical Record (EMR),
    • electronic governance system, and
    • electronic government system
      • Electronic System for Travel Information and Authorization (ESTIA),
      • etc.,

    as well as

  • the infrastructures and services of the
    • distributed computing and parallel computing platforms of the business units of our corporation (see also the Clarification of the 4th of June 2018) and
    • special projects of our Hightech Office Ontonics,

    and align them with the

  • Blockchain as a Service (BaaS) of cloud computing platforms and other blockchain-based systems of SOPR members, and
  • commodities and digital currencies issued and managed by our Ontologic Bank (OntoBank) of our Ontologic Financial System (OFinS) of our SOPR, as discussed in the issues #113 of the 18th of March 2018 and #129 of the 23rd of July 2018.

    In this relation, we welcome the suggestion to introduce data partnerships between the

  • members of our SOPR and especially
  • public authorities and our SOPR.

    Further legal steps
    We will continue with the introduction of such measures as decided above as long as specific destructive activities are not stopped.
    Furthermore, we are increasing the legal pressure on all entities that still do not take us seriously and instead fall prey to the fallacy that our Intellectual Properties (IP) are a free lunch. Correspondingly, we are already working out those fields where we can issue preliminary injunctions immediately, if required.


    03.August.2018

    Clarification #1

    In reports about a mathematician, who recently got an award for his work in the field of arithmetic geometry, we noted that the texts presented him as C.S. (keyword synthesthetics) on the one hand and that a related photo shows him having drawn on a table a number sequence including 2 - 3 - 5 (sequence of Fibonacci numbers).
    In fact, C.S. did that all before, as can be seen with the

  • drawings of for example the Fibonacci number sequence shown in the Originals of the 22nd of September 2009, and
  • The Proposal and the subsequent work of art titled Ontologic System with our Caliber/Calibre that even integrates the fields of algorithmic information theory and arithmetic geometry and also provides a part of the foundation for the new field called Ontonics.

    Clarification #2

    *** Work in progress - might still include some incorrect statements ***
    We already said in the past that we have not looked in detail at the activities related to the blockchain technique at first, because

  • we were quite sure all the time, from what we noticed at all, that it is merely a secure data structure or data store and hence its functionality is already included in our Ontologic System (OS), and also
  • knew that our OS provides more functionality, flexibility, and performance.

    Only some years later we began to show the facts, explain the foundations, and kill the myth (see the messages of the 5th of July 2017 and the other related messages of the months July 2017 and October 2017, and also the Clarification of the 21st of March 2018, the note Dump that island system of the 10.May.2018, and the OntoLix and OntoLinux Further steps or Clarification of the 6th of April 2018).

    The longer we worked on this matter, the better we were able to recall and show that the blockchain technique, other types of distributed ledgers, and similar data stores and databases are included in our OS, or even are copied from it, or both. For example,

  • in the Clarification of the 11th of May 2018 we compared the works of Nick Szabo and Jörg F.Wittenberger with our OS, and as one of the highlights we could prove that cryptocurrencies, like Bitcoin, are based on an original and unique, essential part of our OS, and
  • in the Clarification of the 22nd of July 2018 we could give another, simpler, and more explicit proof that the functionality of a distributed ledger is based on the functionality of a cryptographically secured and distributed transaction log system included in our OS.

    But this is not where the story ends, as even we thought at first.
    On the 2nd of August 2018 (yesterday), we found out that the same group of authors of an online encyclopedia, who referenced the works of Nick Szabo, Jörg F.Wittenberger, and other related works, also claimed that Satoshi Nakamoto invented the blockchain technique with his cryptocurrency Bitcoin. The latter made us wonder for the reasons that

  • N. Szabo already
    • described a data store or database based on cryptographically chained or interlinked records "on top of a Byzantine-resilient replicated object service to maintain the integrity of chains of property titles" respectively a decentralized or distributed ledger, and also
    • suggested its application in the Internet, but
    • missed the combination and integration of
      • secure respectively validated or verified storing of data, specifically by a decentralized or distributed ledger, with
      • distributed or decentralized validating or verifying of said secured data on the basis of distributed computing and decentralized computing paradigms, specifically
        • Peer-to-Peer (P2P) computing,
        • grid computing, and
        • swarm computing, collaborative computing, and volunteer computing (e.g. Berkeley Open Infrastructure for Network Computing (BOINC) middleware system),
  • Satoshi Nakamoto titled his work "Bitcoin: A Peer-to-Peer Electronic Cash System", which
    • is said to solve for the first time the long-standing problem of double-spending without the need of a trusted authority or central server on the one hand and
    • emphasizes the P2P computing paradigm and its utilization for volunteer computing, collaborative computing, and swarm computing on the other hand,

    and

  • Jörg F.Wittenberger
    • described the Peer-to-Peer (P2P) Virtual Machine (VM) Askemos to execute rights or rules in relation to a vote-based consensus protocol, which was later interpreted or redefined as smart contracts and Ricardian contracts, and also
    • missed the blockchain technique,

    which before also led to our proof in relation with cryptocurrencies and blockchain-based systems with Virtual Machine (VM) and/or smart contract protocol (see once again the Clarification of the 11th of May 2018) and now to our allegation that it is not only about cryptocurrencies anymore but also about the blockchain technique and as a further implication about the whole distributed ledger technology.

    N. Szabo indeed discussed in this context all required elements, including

  • "a secure property title service uses cryptographic hash functions and digital signatures", which has been taken by S. Nakamoto for the definition of the "electronic [bit]coin as a chain of digital signatures", or being more precise, for the transaction (data) of an electronic or digital coin,
  • bit commitment "using one-way functions", whereby "[o]ne-way functions are the most basic building block of cryptography [and a] common kind of a one-way function is a cryptographic hash function", which has been integrated by S. Nakamoto in the blocks respectively Merkle trees that hash and encode the transaction data as well,
  • secure distributed time-stamping service, where "users [are] sending a cryptographic hash (a.k.a. message digest) of their document to the [replicated] time-stamping servers[, which] chain messages and cl[o]ck ticks together by order of arrival [... and] can break ambiguities in order of arrival with a protocol such as fair coin tossing to achieve a fair total order", which again has been taken by S. Nakamoto, whereby the message digests are the groups of bit commitments of the transaction data respectively Merkle trees to be time-stamped, and also
  • fair coin tossing used as a global clock in relation with logical broadcast to break ties, which has been substituted by S. Nakamoto with the Proof-of-Work (PoW) approach of for example Hashcash, which again has been integrated in the cryptographically chained blocks of the transaction data and the secure time-stamping server.
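
    The Proof-of-Work substitution mentioned in the last point can be sketched in the style of Hashcash; the difficulty, field sizes, and block content below are illustrative assumptions, not the Bitcoin parameters:

```python
import hashlib

def pow_seal(block_bytes, difficulty_bits=16):
    # Search for a nonce such that SHA-256(block || nonce) falls below a
    # target, i.e. starts with `difficulty_bits` zero bits -- the
    # Hashcash-style substitute for fair coin tossing as a tie-breaker.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = pow_seal(b"timestamped block of transaction hashes")
```

    Finding the nonce is expensive, but checking it takes a single hash, which is why the sealed blocks can replace a trusted clock for ordering.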

    The remaining and somewhat more academic question is now how a blockchain is defined. Is it the

  • data store on the basis of cryptographically chained or interlinked blocks or records, then the resulting chain of blocks or records would be merely a (specific) variant of a secure log or a transaction log,
  • integration of cryptographically chained blocks of a secure (transactional) data store service with cryptographically chained blocks of a secure time-stamping service,
  • integration of a cryptographically secured log or transaction log with the PoW consensus protocol respectively integration of secure storing and secure computing,
  • integration of said elements as a distributed system or decentralized system, or
  • a mixture of these possibilities?

    But eventually,

  • we designed the overall approach with the basic properties and integrating Ontologic System Architecture (OSA) of our Ontologic System (OS), which comprises the
    • works discussed by N. Szabo,
    • work of Jörg F.Wittenberger,
    • integration of all and hence the integration of the works of N. Szabo, Jörg F.Wittenberger, and others, and also
    • blockchain technique,

    while S. Nakamoto designed a specific variant of said features of our OS as a cryptocurrency, and

  • therefore at least our claims in relation with
    • distributed ledgers of this specific type and similar types,
    • applications, specifically cryptocurrencies like Bitcoin, and
    • blockchain-based systems with a Virtual Machine (VM) respectively blockchain 2.0

    remain intact (see once again the Clarification of the 11th of May 2018 and 22nd of July 2018).

    We also wondered about the reactions of large companies, federal institutions, and scientific institutions when we announced our SOPR, specifically because they are all happy with it.
    But this is still not where the story ends.

    By the way: Many of our publications are written on the basis of a commitment scheme, such as the lockstep protocol, so that we can prove that the subject matter is indeed included in our OS, as we explained several times in the past in a more detailed and potentially more complicated way. One of the best examples for this is our network of telescopes and its combination and integration with the other relevant features of our OS, with which we have proven that the works described by N. Szabo in the field of FTRTDS and the fields of Byzantine fault-tolerant Distributed operating system (Dos) and Distributed Virtual Machine (DVM) are included in our OS.
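
    A minimal sketch of such a hash-based commitment scheme in Python (commit now, reveal later; the helper names are ours):

```python
import hashlib
import os

def commit(message: bytes):
    # Publish only the digest now; the random nonce prevents guessing
    # attacks on low-entropy messages.
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + message).hexdigest()
    return digest, nonce  # digest is published, nonce is kept secret

def reveal(digest, nonce, message: bytes):
    # Later, message and nonce are disclosed; anyone can check that they
    # match the digest published earlier, proving the message existed
    # at commit time without having been guessable from the digest.
    return hashlib.sha256(nonce + message).hexdigest() == digest

digest, nonce = commit(b"the subject matter is included in our OS")
```

    The scheme is binding (the committer cannot later substitute another message) and hiding (the digest reveals nothing useful about the message until the reveal).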

    One main point is the feature of the R4 file system that a file can be a directory as well, and another one is our integration of it with a distributed persistent object store of the Cognac system based on the fault-tolerant and reliable distributed operating system Apertos. This allows us to

  • take a transaction log (file), which includes one or more transaction data and one or more corresponding timestamps, by the way,
  • cryptographically secure this transaction log (file) by using a cryptographically secure hash for example,
  • utilize this secure (transaction) log (file) as a directory,
  • put the next cryptographically secured transaction log (file) into said directory respectively said secure (transaction) log (file),
  • cryptographically secure said directory or file,
  • and so on,

    or to

  • take a different scheme as part of a Fault-Tolerant, Reliable, and Trustworthy Distributed System (FTRTDS).

    Eventually, we end up with some kind of a blockchain and distributed ledger.
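
    The steps above can be sketched with nested dictionaries standing in for directories and files; this is a conceptual sketch only, since the systems named above operate on actual file-system objects:

```python
import hashlib
import json

def secure(entry):
    # Cryptographically secure an entry by hashing its serialization --
    # the seal applied at every step of the scheme.
    body = json.dumps(entry, sort_keys=True)
    return {"entry": entry, "hash": hashlib.sha256(body.encode()).hexdigest()}

def nest(outer_log, next_log):
    # Use the secured log as a directory, put the next secured
    # transaction log inside it, and secure the enclosing "directory".
    return secure({"dir": outer_log, "contains": next_log})

log0 = secure({"tx": "A -> B", "ts": 1})
log1 = secure({"tx": "B -> C", "ts": 2})
ledger = nest(log0, log1)
```

    Because each outer hash covers the inner secured logs, tampering with any nested log invalidates every enclosing seal, which yields the chained structure described in the text.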


    11.August.2018

    18:03 UTC+2
    SOPR is coming to you as well

    *** Work in progress - fees not balanced ***
    We made a first estimation of the fees that are due for a standalone smart 3D and 360° camera, which is used for (real-time) 3D scanning, modeling, Image-Based Modeling and Rendering (IBMR), and reconstructing, image stitching, user tracking, and so on, and has functionalities based on the fields of Artificial Intelligence (AI), Machine Learning (ML), Computer Vision (CV), and Cognitive Vision (CV or CogV), including Natural Image Processing (NIP), as well as our MobileKinetic technology, or simply said a handheld Ontoscope of our smartcamera series (see also the Comment of the Day of the 13th of September 2017, 5th of October 2017, and 17th of October 2017).

    The following quotes from the description of such a smartcamera are undeniable evidence that proves the causal link to our Ontologic System (OS) and our Ontoscope (Os):
    "With the CNN (convolutional neural network) deep learning framework, our system can recognize the main subjects in the videos. The AI model is developed to simulate human habits and continuously keep tracking the important subject in every frame. [First of all, we have to clearify that CNNs belong to the subfield Machine Learning (ML) of the field of Softbionics (SB). Furthermore, see the webpage Ontologic Applications, the OntoScope software component, as well as the section Human Simulation/Holomer of the webpage Links to Software of the website of OntoLinux.]",
    "Keep your target at the center of the frame with our AI smart tracking functionality. Touch the screen to focus on the object you want to track and let the camera do the rest. [Obviously, we have here the interplay of our OS with its OntoBot software component and our MobileKinetic technology.]",
    "[The functionality] allows users to capture every part of a motion in 360. [...] The technology, based on real-time moving target recognition and segmentation, allows users to create amazing multiplicity video with just a signal touch. [See the comments made to the previous quotes and also the Insight Segmentation and Registration Toolkit (ITK) listed in the section Visualization of the webpage Links to Software and our related Votography technology.]",
    "The depth estimation algorithms baked into the [3D and 360°] cameras were based on the latest deep learning technology. With these smart vision abilities, [the 3D and 360° cameras] could be able to estimate accurate depth map from even a single 360 degree footage. Once the depth maps are generated, our 3D reconstruction engine (containing the computer vision technology called SLAM: simultaneous mapping and localization), kicks in and calculates the position of the camera, while drawing the 3D point cloud of the environment. The 3D models and virtual tours will be done after all this processing. [SLAM algorithms were developed in the field of robotic mapping and navigation, and the Ontoscope (Os) has always been described as the hardware related part of our Ontologic System (OS) and compared with an immobile robot (immobot), a cognitive robot head, a cyborg head, and an image of a users head or C.S.' head as part of a cybernetic self-portrait (see the Chapter 5 of The Proposal, the webpage Ontologic Applications, the section Robotics of the webpage Links to Hardware of the website of OntoLinux, the Announcement Ontoscope 2.0 of the 10th of August 2008, the Ontoscope Further steps of the 11th of July 2009, and the Investigations::Multimedia of the 08th of September 2009) in addition to the Golden Compass, Tricorder of the Star Trek saga, and other items, as well as the hardware operated with respectively in our OS and used as the access device and ontologic instrument for respectively in our OS. Obviously, we have here our Os and the interplay of our Os and our OS with its OntoBot, OntoScope, and OntoBlender software components.]", and
    "We applied an advanced computer vision technology called DCNN background subtraction to segment the moving target from a video sequence. This algorithm learns environment information from the videos, and automatically recognizes the moving targets, such as human body, pets, and vehicles. Then the [3D and 360°] App will put all the segmented foregrounds with a clean background for the final [...] animation. [See the comment made to the quote before. Also note that we have here the field of Cognitive Vision (CV or CogV) utilized for object recognition and more functionalities of our OntoBlender (see also the sections VisualizationVisualization and of the webpage Links to Software of the website of OntoLinux).]".
    Even the name of the company, which is merely a portmanteau of the words evolution and emotion, has been copied from our website of OntoLinux; it reflects the integration of the

  • field of Evolutionary Computing (EC) and our Evolutionary operating system (Evoos) discussed in the The Proposal and
  • Emotion Machine Architecture based on our Evoos and referenced in the section Integrating Architecture of the webpage Overview,

    as can be seen easily.

    Obviously, we have here at least a reproduction of our

  • Ontologic System (OS), including
    • SoftBionics (SB)
      • Artificial Intelligence (AI),
      • Machine Learning (ML),
      • Computer Vision (CV),
      • Simultaneous Localization And Mapping (SLAM) system, and
      • Cognitive Vision (CV or CogV),
    • MobileKinetic,
    • OntoBot,
    • OntoScope
      • Augmented Reality (AR) and
      • Virtual Reality (VR),

      and

    • OntoBlender,

    and

  • Ontoscope (Os) in a handheld or palm computer version.

    We also have the performance of our

  • Ontologic Applications and Ontologic Services (OAOS), including
    • cloud computing,
    • streaming,
    • etc.

    This might sum up to a final fee in the range of

  • 1 × 10.00 USD + 1 × 5.00 USD + 1 × 1.00 USD = 16.00 USD to
  • 1 × 12.50 USD + 1 × 7.50 USD + 1 × 1.00 USD = 21.00 USD.

    In addition,

  • 5% share of the overall revenue generated with the performance.
    If the performance is provided for free, the share will be estimated on the basis of comparable cloud computing services.
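
    The two estimates above can be reproduced with a simple calculation; the component amounts are the ones listed, and the helper names are purely illustrative:

```python
def total_fee(os_fee, oaos_fee, extra_fee, units=1):
    # One unit each of the three per-device fee components listed above.
    return units * (os_fee + oaos_fee + extra_fee)

low = total_fee(10.00, 5.00, 1.00)    # lower bound of the range
high = total_fee(12.50, 7.50, 1.00)   # upper bound of the range

def revenue_share(revenue, rate=0.05):
    # The additional 5% share of the overall revenue generated
    # with the performance.
    return revenue * rate
```

    This makes explicit that the per-device fee is a fixed sum while the revenue share scales with turnover.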

    Even crowdfunding platform providers and manufacturers from the P.R.China have to pay our royalties to find harmony and become happy.

    If companies like this one refuse to

  • comply with the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR) and
  • pay the royalties,

    then we are allowed to demand the banning or blocking of the

  • deployment of its app(s) in app stores, and
  • execution of its app(s) and service(s)
    • on devices,
    • in data centers,
    • in the Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), including
      • streaming on social network and video platforms,
      • entertaining on online game platforms, and
      • sharing on 2D and 3D image and model platforms.


    12.August.2018

    02:22 and 14:45 UTC+2
    SOPR #132

    *** Work in progress - better explanation and wording ***
    We have thought about the following topics:

  • platform vs. infrastructure,
  • distributed and Mediated Reality systems,
  • Augmented Reality systems, applications, and services, and
  • legal matters.

    Platform vs. infrastructure
    In relation with the discussions and investigations about the

  • plagiarism of parts of our Ontologic System (OS) with its Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) (see the Investigations::Multimedia of the 31st of January 2013),
  • Fault-Tolerant, Reliable, and Trustworthy Distributed Systems (FTRTDSs),

    and

  • integration of Distributed Systems (DSs) and Mediated Reality Environments (MedREs) (see the related section below)

    we noted that we are talking about grid computing and cloud computing vs. Peer-to-Peer (P2P) computing in the sense of centralized platform vs. decentralized infrastructure as well. This suggests a simple rule of thumb to separate or draw the white, yellow, or red line in respect to the management and control respectively permission and prohibition by our Society for Ontological Performance and Reproduction (SOPR) along these differentiations and characterizations.
    {But cloud computing platforms also centralize systems, applications, and services of P2P computing platforms, for example by providing Platform as a Service (PaaS).}

    Distributed and Mediated Reality systems
    We observed several activities of already existing companies and start-ups in the integrated field of Mediated Reality (MedR) and cloud computing systems, specifically in relation with

  • location-based, persistent, and/or shared or multi-user Augmented Reality Environments (AREs), and
  • devices featuring object recognition (see Cognitive Vision (CV or CogV)) and environmental understanding respectively context(ual), situation(al), temporal, spatial, spatial context(ual), location, and spatial self-awareness (see Cognitive Agent System (CAS)).

    According to the Articles of Association (AoA) and the Terms of Service (ToS) of our Society for Ontological Performance and Reproduction (SOPR)

  • providing cloud computing services is permitted but
  • providing foundational infrastructure and basic overall standards in relation with our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) is prohibited without management and control by our SOPR, specifically combinations or integrations of MedR cloud computing platforms with blockchain platforms (see also the issues #121 of the 29th of May 2018, #129 of the 23rd of July 2018, #130 of the 29th of July 2018, and #131 of the 1st of August 2018).

    Moreover, as we already said in former issues, the

  • basis for collaboration already are our ON, OW, and OV managed on the ground of the Articles of Association (AoA) and the Terms of Service (ToS) as well as the guidelines and the interfaces of our Society for Ontological Performance and Reproduction (SOPR) and
  • standard already is our SOPR as well (see also the issues SOPR #15 of the 24th of September 2017, #61 of the 28th of November 2017, #68 of the 3rd of December 2017, #70 of the 5th of December 2017, and #121 of the 29th of May 2018).

    Or said in other words, we expect that all these MedR cloud computing platforms will provide interfaces, so that they can be unified and united under the umbrella of the SOPR (see also the related OntoLix and OntoLinux Further steps of the 23rd of September 2017).

    Augmented Reality systems, applications, and services
    We would like to repeat once again that companies in fields, like for example

  • online shopping,
  • fashion,
  • cosmetics,

    are not SOPR-agnostic. Indeed, their systems, applications, and services are not merely based on, for example, an AR mirror anymore, but have integrated

  • in general
    • SoftBionics (SB),
  • in particular
    • recommender systems,
    • conversational dialog systems, and
    • agent-based systems,

    and

  • other features of our OS

    as well. In addition, their systems in the data center(s), specifically the ones used to process big user data for themselves, and their applications and services are also based on parts of our OS.

    Legal matters
    Our copyright in relation with the iconic works of art titled Ontologic System and Ontoscope, and created by C.S. has been confirmed for another time with the activities in the integrated field of MedR and cloud computing respectively the related part of our ON, OW, and OV, and we do not expect anymore that there is still any incompetent entity that claims that our original and unique ontologic works are not sufficiently creative, original, unique, and substantial, or were foreseeable and expected.

    The problem with the open source licensing of, for example, the Linux-kernel-based operating system Android of the company Google, used by the Open Handset Alliance (OHA), which features for example

  • assistant with conversational dialog system based on SoftBionics (SB)
    • Artificial Intelligence (AI),
    • Machine Learning (ML),
    • Computer Vision (CV),
    • Cognitive Software Agent System (CSAS),
    • Cognitive Vision (CV or CogV),
    • etc.,
  • multimodal features, and
  • Mediated Reality (MedR)
    • Augmented Reality (AR),
    • Augmented Virtuality (AV),
    • Virtual Reality (VR), and
    • Mixed Reality (MR),

    is still not solved.

    We find it very exciting to see how the whole OS is materializing and lifting up.


    14.August.2018
    OntoLix and OntoLinux Website update
    Somehow we have not listed the points CarMapCloud and SpaceshipMapCloud, as well as MobileMapCloud, WearableMapCloud, and RobotMapCloud on the webpage of the OntoWeb software component. This omission has been corrected today.

    Preliminary investigation of Linux Foundation and Scylladb continued
    We noticed the statement "to avoid a guaranteed context switch on every wakeup we trust keyed wakeups", but we were not able to interpret the usage of the term "trust" completely.
    Today, we found in an online encyclopedia the following explanation in relation with capability-based security: "A capability (known in some systems as a key) [...]". The explanation also makes clear that "[c]apabilities as discussed in this article should not be confused with POSIX 1e/2c "Capabilities". The latter are coarse-grained privileges that cannot be transferred between processes."

    It seems that the responsible entities have not copied our exception-less system call mechanism but used a capability-based mechanism in relation with an asynchronous system process.

    This leads us to the capability-based security extension called Capsicum and developed at the University of Cambridge (see the related note below).

    Preliminary investigation of Cambridge Capsicum started
    We noticed that the Capsicum capability-based security extension for the Linux kernel and the BSD Unix kernel developed at the University of Cambridge is sponsored by the company Google, and we immediately knew that this activity of Google is also connected with our original and unique works of art titled Ontologic System and Ontoscope, and created by C.S., as usual.
    Today, we found in an online encyclopedia the following explanation in relation with capability-based security: "Capabilities as discussed in this article should not be confused with POSIX 1e/2c "Capabilities". The latter are coarse-grained privileges that cannot be transferred between processes. [...] POSIX draft 1003.1e specifies a concept of permissions called "capabilities". However, POSIX capabilities differ from capabilities in this article - POSIX capability is not associated with any object; a process having CAP_NET_BIND_SERVICE capability can listen on any TCP port under 1024. In contrast, Capsicum capabilities on FreeBSD and Linux hybridize a true capability-system model with the UNIX design and POSIX API. Capsicum capabilities are a refined form of file descriptor, a delegable right between processes and additional object types beyond classic POSIX, such as processes, can be referenced via capabilities. In Capsicum capability mode, processes are unable to utilize global namespaces (such as the filesystem namespace) to look up objects, and must instead inherit or be delegated them."

    Now we, or better said the developers of Capsicum and Google, have a(nother) problem, because this is evidence that provides a causal link with our Ontologic System: proper capability-based security in Linux kernel and Unix based operating systems and said hybridization are original and unique features of our integrating Ontologic System Architecture (OSA), as well as sufficiently creative and significant to claim the copyright and other rights (see for example the Investigations::Car #195 of the 21st of November 2009 or older information to find out that OntoLix is indeed based on UNIX or a derivative of UNIX).
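    The distinction quoted above can be illustrated with a small toy model (our own sketch for illustration only, not the actual Capsicum API or any real implementation): a capability pairs an object with a set of rights, can be delegated and weakened but never strengthened, and a process in capability mode cannot look up objects via a global namespace such as the filesystem.

```python
# Toy model (illustration only, not the Capsicum API): a capability pairs
# a reference to an object with a set of rights, and is only usable by a
# process that inherited it or was delegated it.

class Capability:
    """An unforgeable (object, rights) pair, delegable between processes."""
    def __init__(self, obj, rights):
        self._obj = obj
        self._rights = frozenset(rights)

    def invoke(self, right):
        if right not in self._rights:
            raise PermissionError(f"capability lacks right: {right}")
        return self._obj

    def restrict(self, rights):
        # A holder may derive a weaker capability, never a stronger one.
        return Capability(self._obj, self._rights & set(rights))

class SandboxedProcess:
    """In 'capability mode' the process cannot look up objects by global
    name; it can only use the capabilities it holds."""
    def __init__(self, delegated):
        self._caps = list(delegated)

    def open_by_path(self, path):
        raise PermissionError("global namespace lookup denied in capability mode")

    def read(self, cap):
        return cap.invoke("read")
```

    In contrast, a POSIX 1e "capability" in this toy model would be a flag on the process itself, not attached to any object and not transferable.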

    For sure, this has further legal implications, specifically in relation with secure cloud computing systems and container-based systems.


    15.August.2018

    08:22 UTC+2
    SOPR #133

    *** Work in progress - better ordering and wording, some details might be missing ***
    The topics of this issue are:

  • distributed and Mediated Reality (MedR) systems,
  • new members, and
  • legal matters and recommendations.

    Distributed and Mediated Reality (MedR) systems
    In general, with the creation of our original and unique, iconic Ontologic System and Ontoscope we even

  • created a vast range of new functionalities with the OntoBot and OntoScope software components of our OS,
  • created a tool for the creation and management of systems, applications, and services, and also contents of MedR with the OntoBlender software component of our OS,
  • integrated these software components with our integrating Ontologic System Architecture (OSA), as well as
  • a device or instrument with our Ontoscope, which eventually makes for example Computer Vision (CV)-enabled multi-user MedR experiences a reality, and even
  • defined the path for mass adoption of Mediated Reality (MedR) and the spatial and temporal Internet, as well as the successors of the Internet and the World Wide Web (WWW) with our Ontologic Net (ON), our related Ontologic Web (OW), and our Ontologic uniVerse (OV).

    The Augmented Reality cloud computing (AR cloud) paradigm, as well as persistent and/or multi-user or shared Augmented Reality (AR) experiences with or without environmental understanding, are currently the main areas of activity where the related parts of our original and unique Ontologic System are implemented (see also the issue #132 of the 12th of August 2018).

    In former issues,

    • cloud computing services,
    • Ontologic Net Services (ONS), and
    • Ontologic Web Services (OWS)
      • geographic information services,
    • mobility services,
    • Ontologic Applications
      • SoftBionics (SB)
        • Artificial Intelligence (AI),
        • Machine Learning (ML),
        • Computer Vision (CV),
        • Cognitive Vision (CV or CogV),
        • Cognitive Software Agent System (CSAS),
        • etc.,
      • Augmented Reality (AR)
        • navigation (overlays),
        • multi-user trip planning,
        • real-time asset tracking for logistics companies and online shoppers,
        • multiplayer gaming,
        • interactive city tours,
        • enhancements for mobility services
          • ridesharing services,
        • etc.,

    have been shown to be Ontologic Applications and Ontologic Services (OAOS) as well. We have to note that we provide such OAOS already with our Ontologic Web (OW), which is based on the functionalities of the Multi Global Positioning System (MultiGPS), MapCloud Computing, and also OntoGlobe and OntoEarth (see also Active Positioning of Style of Speed). Furthermore, the location-based services Google Street View and Here also provide basic functions of GeoInformation Systems (GISs), and smart city and mobility applications, which are even based in parts on the functionalities of our Ontologic System (OS) and Ontologic Web (OW) (see also the Ontonics Further steps of the 6th of October 2017).

    In the issue #132 of the 12th of August 2018 we repeated our expectation that distributed and Mediated Reality (MedR) systems, such as AR cloud computing platforms, will have interfaces implying that they are based on the OntoNet, OntoWeb, and OntoVerse software components of our Ontologic System (OS).

    New members
    We have new members of our Society for Ontological Performance and Reproduction (SOPR):

  • Mapbox

    "Mapbox is a provider of custom online maps for websites and applications. [...] The data are taken both from open data sources, such as [the] OpenStreetMap (OSM) and [the National Aeronautics and Space Administration (]NASA[)], and from proprietary data sources, such as DigitalGlobe.[...] Mapbox uses data from tracks of its clients' users [...] to identify likely missing data in OpenStreetMap with automatic methods, then manually applies the fixes or reports the issue to OSM contributors."
    Mapbox AR provides developers with locations, business information, and live location data aggregated from users. Via the Maps SDK for [another partial clone of our OS], the platform also enables multi-user AR experiences where users can interact in real-time.
    In February 2018, the location service provider Mapbox began giving developers a means for building location-based AR apps and multi-user experiences with its new Mapbox AR toolkit.
    Crowdsourcing or collaboratively creating 2D and 3D data for maps, specifically on the basis of data captured with an Ontoscope or another device operated with our OS respectively by a peer in our OS, is an original and unique project of C.S. (see also the OntoLinux Further steps of the 26th of November 2010).
    The integration of the OSM with the Virtual Object System (VOS) and hence with multi-user Augmented Reality (AR) is included in our OS with our OntoGlobe (3D) with the Ontologic Map (OntoMap) (2D) (see also the OntoLix and OntoLinux Further steps of the 23rd of September 2017).
    The company is funded by lead investor SoftBank and other investors, including venture-capital firms Foundry Group, DFJ Growth, DBL Partners, and Thrive Capital.

  • Niantic

    Niantic provides an AR cloud platform for multiplayer gaming and interaction with real world objects.
    The so-called Niantic Real World Platform for mobile apps enables not only multiplayer, cross-platform augmented reality experiences, but also facilitates environmental understanding for occlusion, or the ability for AR content to appear in front of or disappear behind objects in the real world.
    Using a smartphone's camera and Computer Vision (CV), or being precise an Ontoscope, the Niantic Real World Platform can recognize landmarks and objects in the environment and track changes over time, and uses Machine Learning (ML) to classify objects. This enables apps to present content that blends naturally and logically into the environment.
    For multiplayer, the platform utilizes low-latency networking between users, regardless of mobile OS, to ensure that experiences are in sync.
    Not surprisingly, the AR platform is also described as an operating system that bridges the digital and the physical worlds, with which we are pushing the boundaries of geospatial technology and creating a complementary, interactive real-world layer.

    Niantic is planning to chart AR maps with crowdsourcing help from gamers respectively users, similar to Mapbox, but crowdsourcing visual 3D data and/or AR features of our OS.
    The company's AR maps will be built from data captured by players' cameras as they play Niantic's location-based AR games.
    A multiplayer AR platform based on CV will serve as the foundation for the map. Niantic noted at the time that it would offer the multiplayer capabilities as a service to other developers.
    It offers persistent, shared AR as part of the Niantic real-world application platform.
    6D.ai is taking a similar approach to Niantic, while Blue Vision and Ubiquity6 are also acting in the area of persistent multi-user, shared, or collaborative AR platforms and experiences.
    Crowdsourcing or collaboratively creating 2D and 3D data for maps, specifically on the basis of data captured with an Ontoscope or another device operated with our OS respectively by a peer in our OS, is an original and unique project of C.S. (see also the OntoLinux Further steps of the 26th of November 2010).
    Obviously, the platform has been copied from our original and unique, iconic OS and requires the utilization of a device based on our Ontoscope.

  • 6D.ai

    6D.ai is taking a similar approach to us. Our technology can capture a 3D mesh with just an Ontoscope camera and would be capable of running in the background to collect environmental data as users go about their daily activities.
    The company checks off the boxes for multiplayer, cross-platform compatibility, persistence, and occlusion meshing.
    Its AR cloud with a cryptocurrency is not considered for licensing anymore due to the regulation introduced with the issue #131 of the 1st of August 2018.
    Crowdsourcing or collaboratively creating 2D and 3D data for maps, specifically on the basis of data captured with an Ontoscope or another device operated with our OS respectively by a peer in our OS, is an original and unique project of C.S. (see also the OntoLinux Further steps of the 26th of November 2010).

  • Blue Vision

    "The startup [...] began offering early access to the SDK for its Blue Vision AR Cloud platform. Blue Vision's take on the AR cloud relies on [C]omputer [V]ision [(CV)]. The company is planning to store frequently-updated AR maps of cities on its servers, and then apps using the service can obtain location information with "centimeter precision" based on visual data from the device's camera.
    With such a high degree of location accuracy, app developers could conceivably anchor content that multiple users would be able to see in the same place. This would make multi-player gaming, AR navigation, and shared social AR experiences a lot easier."
    Blue Vision joins the location service provider Mapbox and the startup 6D.ai in offering services to facilitate multi-user AR experiences. Likewise, [...] Niantic also plans on offering a multi-player service for gaming [...].
    The company is funded by the lead investor GV (formerly known as Google Ventures) among others.
    The integration of MedR and cloud computing is an original and unique, essential element of our OS. The same holds for our Ontoscope operated by an OS.

  • Ubiquity6

    Leveraging Computer Vision (CV) and the sensors embedded in the Ontoscope, the Ubiquity6 platform gives apps the capability to map and understand environments, and anchor persistent content that interacts intelligently with its environment. Developers will be able to use these aspects to help facilitate multiplayer AR gaming and multi-user social sharing AR experiences.
    The company was developing an Ontoscope that doesn't just let you capture the world, but a totally new device or instrument that lets users edit reality together in physical spaces that matter to them.
    Blue Vision is an AR cloud startup that intends to facilitate similar shared experiences.
    The company is funded by the Artificial Intelligence (AI) investment arm of the company Google, Gradient Ventures, among the lead investors Benchmark and Index Ventures, and also First Round Capital, Kleiner Perkins, LDVP, A+E, and WndrCo.
    The overall concept and the integration of MedR and cloud computing are original and unique, essential elements of our OS. The same holds for our Ontoscope operated by an OS.

  • Yelp

    The company Yelp developed an AR application to show travelers information about hotels for example.

  • Magic Leap

    Its Head-Mounted Display (HMD) device is a head-worn Ontoscope. Magic Leap is also developing an operating system, which is based on our Ontologic System, obviously.
    The company is funded by Google and others.

    Legal matters and recommendations
    We recall once again that companies

  • are only allowed to provide their systems, applications, and services, which are based on our Intellectual Properties (IPs), which again can only be licensed by their creator C.S. in accordance with the international copyright law, under another license, which is accredited by our SOPR, and
  • have to inform their customers about the royalties of our SOPR and even to support our SOPR in collecting the royalties being due, correspondingly.

    If companies like the ones listed above refuse to

  • comply with the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM) of our Society for Ontological Performance and Reproduction (SOPR) and
  • pay the royalties,

    then we are allowed to demand the banning or blocking of the

  • deployment of their app(s) in app stores, and
  • execution of their app(s) and service(s)
    • on devices,
    • in data centers,
    • in the Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), including
      • streaming on social network and video platforms,
      • entertaining on online game platforms, and
      • sharing on 2D and 3D image and model platforms.

    We also recall that the LM applies for the investors including venture capital firms as well (see the issue #123 of the 29th of June 2018).

    Last but not least, we also highly recommend to

  • read the Comment of the Day and see the Pictures of the Day of the 25th of July 2013 and the related comment with the judgement of Judge Birss (once again),
  • learn the
    • difference between an idea and an expression of an idea,
    • meaning when a work of art is copyrighted,
    • nature of unfair business practice or unfair competition, as well as
    • implications of an infringement of a copyright and other rights,

    and

  • factor our royalties in their decision making.

    08:22 UTC+2
    Oh, what ...?

    The following information rounds off the list of new members of our Society for Ontological Performance and Reproduction (SOPR) (see the issue SOPR #133 of today):

  • Google offers the Google Maps API for location-based AR games and is bringing multiplayer experiences to Android and iOS via its shared AR experience platform Cloud Anchors based on its Augmented Reality (AR) Software Development Kit (SDK) ARCore.
  • Apple showed a demo of an AR application comprising a Lego model where users share AR experiences on different devices and announced that it will bring multi-user support, persistent content, and object recognition to its ARSDK ARKit 2.0 this fall 2018. It is also collaborating with at least the manufacturer of smartglasses, data glasses, Optical Head-Mounted Displays (OHMDs), or see-through HMDs, Lumus, and financing the related research and development of a glass manufacturer, which have to be viewed as alibis for the mimicking of C.S. and our corporation.
  • Amazon has copied essential parts of our OntoBlender software component, including parts of its integration with other components of our Ontologic System (OS), such as the
    • OntoBot (Natural Language Processing (NLP), speech recognition, natural-language understanding, text-to-speech, and conversational chatbot and agent),
    • OntoScope, and
    • OntoWeb (cloud computing and Speech Cloud), as well as
    • OntoGlobe (3D) with the OntoMap (2D) (location services based on e.g. the OSM; see also the note about Mapbox above),

    and provides related services with its AR cloud computing platform.


    16.August.2018

    20:27 UTC+2
    SOPR #134

    *** Proof-reading mode ***

    Solutions that are based on

  • OntoBot software component based on
    • SoftBionics (SB), including
      • Artificial Intelligence (AI),
      • Machine Learning (ML),
      • Computer Vision (CV), including
        • image recognition,
        • etc.,
      • Simultaneous Localization And Mapping (SLAM) system,
      • Cognitive Vision (CV or CogV), including
        • object recognition,
        • etc.,
      • Cognitive Software Agent System (CSAS), including
        • context(ual), situation(al), temporal, spatial, spatial (contextual), location, and spatial self-awareness,
        • etc.,

      and

    • robotics
      • Cognitive robot (Cbot or Cogbot), including
        • immobile robot (immobot),
      • robotic mapping and navigation, including
        • SLAM
      • CogV,
      • etc.,
  • OntoScope software component based on
    • Augmented Reality (AR),
    • Virtual Reality (VR), and
    • Mixed Reality (MR),
  • MobileKinetic,
  • Multi Global Positioning System (MultiGPS),
  • MapCloud Computing, and also
  • OntoGlobe and
  • OntoEarth software components, as well as
  • characteristics and features of our Ontoscope, including
    • immobot,
    • cognitive robot head,
    • cyborg head,
    • cybernetic self-image,
    • cybernetic extension of a user, and
    • access device for respectively in our Ontologic System,

    like for example the

  • Augmented Reality Markup Language (ARML) and similar formats, and
  • multi-user or shared AR experience platforms

    are interesting but we must have

  • anchors or Time IDs in a consistent, physical time and anchors or Space IDs in a consistent, physical location with variable persistence ranging
    • from short-lived
    • over stored
    • to long-lived or archived,

    and

  • common reality anchors used as common reality reference frames for the whole infrastructure of the SOPR being available for all members.

    Indeed, a Time ID and a Space ID belong to the

  • foundations of the Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) (see also the webpage about the Caliber/Calibre on the website of our OS OntoLinux) and
  • infrastructure of our SOPR.

    Correspondingly, we have the opinion that these reference data, which are captured with an Ontoscope or a device operated with our Ontologic System respectively by a peer in our OS, belong to the data that members of the SOPR should provide for the SOPR platform and its unifying and uniting infrastructure (see the related implementations of the companies Mapbox, Niantic, etc.) in accordance with the AoA and the ToS (see the section Duties of Members in the issue #35 of the 24th of October 2017 and also the issues #132 of the 12th of August 2018 and #133 of the 15th of August 2018 (yesterday)).

    20:27 UTC+2
    OntoLix and OntoLinux Further steps

    *** Work in progress - better order and wording ***
    From an introduction to the Virtual Object System (VOS) we got the information that "[n]early all other features of VOS derive from this object model. Although we call VOS a "hierarchical" distributed object system, object interlinking is actually a directed graph. However, it is at its most useful when used to group objects in "contains" or "describes" relationships."
    In accordance with our Ontology-Oriented (OO 2) paradigm an

  • Object Type Definition (OTD) is described by one or more ontologies,
  • object class is declared or defined by one or more ontologies in addition to the common elements of a class, and
  • object interlinking, like a directed graph of the VOS, is an object itself and therefore declared or defined by one or more ontologies as well.

    The one or more ontologies are included in its class declaration and/or in a referenced Semantic World Wide Web (SWWW) file written in the eXtensible Markup Language (XML) format of the Resource Description Framework (RDF) and/or the Web Ontology Language (OWL).
    XML grammars and Domain Specific Languages (DSLs) can be

  • used as well if they equal the characteristics of an ontology or
  • transformed into an ontology.

    Our Zero Ontology can be used as part of the Object-Oriented (OO 1) paradigm.
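    As a minimal sketch of this Ontology-Oriented declaration scheme (all class names and IRIs below are hypothetical illustrations, not the actual OS implementation), a class and an object interlinking can each carry the IRIs of the one or more ontologies that declare or define them, with the interlinking itself modeled as an object in a directed graph:

```python
# Sketch (hypothetical names and IRIs): classes and interlinkings each
# reference the one or more RDF/OWL ontologies describing them, and an
# interlinking is an object itself, so the object graph is directed.

class OntologyDescribed:
    ontologies = ()  # IRIs of RDF/OWL documents describing this type

class VirtualObject(OntologyDescribed):
    ontologies = ("http://example.org/onto/object.owl",)
    def __init__(self, name):
        self.name = name
        self.links = []

class Interlink(OntologyDescribed):
    """A 'contains' or 'describes' edge; an object in its own right."""
    ontologies = ("http://example.org/onto/link.owl",)
    def __init__(self, relation, source, target):
        self.relation = relation
        self.source = source
        self.target = target
        source.links.append(self)

def reachable(obj):
    """Follow the directed graph of interlinks outward from obj."""
    seen, stack = set(), [obj]
    while stack:
        o = stack.pop()
        if id(o) in seen:
            continue
        seen.add(id(o))
        stack.extend(link.target for link in o.links)
    return seen
```

    In this sketch the referenced ontology files would be written in the XML format of RDF and/or OWL, as described above.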

    This approach results in the

  • eXtensible Virtual Object System (XVOS) or Virtual Ontology-Oriented System (VOOS), and also
  • Mediated Objects (MedOs), which are classified as a
    • (real) object - whole object at one location or in one space,
    • augmented (real) object - (real) object augmented with (parts of) virtual object,
    • augmented virtual object - virtual object augmented with (parts of) (real) object, and
    • virtual object - whole object at one or more distributed locations or in one or more distributed spaces,

    and

  • Mediated Containers (MedCons), which are classified in the same way as a MedO.

    As can be seen, an object or even a container on the one side and a graph of the XVOS on the other side can overlap or even be the same.
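    The four-way classification of MedOs and MedCons above can be pictured with a small sketch (the tie-breaking rule for mixed real/virtual cases is our own simplifying assumption, not a definition taken from the OS):

```python
# Sketch (hypothetical rule for mixed cases): classify a Mediated Object
# (MedO) by which of its parts are real and which are virtual.

def classify_medo(real_parts, virtual_parts):
    """Return one of the four MedO classes listed above."""
    if real_parts and not virtual_parts:
        return "(real) object"
    if virtual_parts and not real_parts:
        return "virtual object"
    # Both kinds of parts are present; here we simply let the larger
    # side decide which object is augmenting which (an assumption).
    if len(real_parts) >= len(virtual_parts):
        return "augmented (real) object"
    return "augmented virtual object"
```

    A Mediated Container (MedCon) would be classified in the same way over the parts it contains.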

    The OntoNet software component of our Ontologic System (OS) already integrates the VOS as well as our XVOS or VOOS with the Information-Centric Networking (ICN), Content-Centric Networking (CCN), and Named Data Networking (NDN).

    Moreover, Distributed Systems (DSs), including platforms of the fields of Peer-to-Peer (P2P) computing, grid computing, and cloud computing, and also distributed ledgers based on the blockchain technique or a Directed Acyclic Graph (DAG), as well as Mediated Reality Environments (MedREs), also use for example Topic Maps (TM) (in e.g. RDF) and ontologies for semantic web computing and semantic grid computing.

    In this relation, we

  • increased the number of Spherical Centroidal Voronoi Tessellation (SCVT) cells respectively quasi-uniform hexagons of the one fixed Voronoi mesh in 2D by a factor of 100 to around 15 trillion SCVT cells, each with a diameter of around 0.393 inches/1 centimeter,
  • introduced the Time ID and the Space ID, which
    • have the same status and function as a Uniform Resource Locator (URL) (e.g. domain name, web address, or IP address) (also compare with the example of the VOS) and
    • identify reference data and their features or properties captured with an Ontoscope or another device operated with our OS respectively by a peer in our OS, such as
      • images,
      • models, including for example
        • scanned horizontal or vertical surfaces,
        • contrast points, color and/or lighting changes,
        • point clouds, and
        • meshes,
      • sounds or audio records,
      • Fourier series and transforms,
      • and so on,
  • set the anchors of the Time ID and the Space ID, such as
    • time (stamp),
    • geometry,
    • trackable,
    • reference frame for markerless instant tracking,
    • and so on,

    and

  • introduced the OntoScope (Reality) Lens, which allows
    • utilizing,
    • performing, and
    • experiencing

    of Time IDs and Space IDs in differentiated and customized ways.
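    One way to picture a Time ID and a Space ID is as URL-like records bundling an anchor, a persistence class, and the kinds of captured reference data (all field names and the URI schemes below are illustrative assumptions, not the actual identifiers of the infrastructure):

```python
# Illustrative sketch (hypothetical fields and URI schemes): Time IDs and
# Space IDs as addressable records with anchors and variable persistence.
from dataclasses import dataclass
from enum import Enum

class Persistence(Enum):
    SHORT_LIVED = "short-lived"
    STORED = "stored"
    ARCHIVED = "long-lived or archived"

@dataclass(frozen=True)
class SpaceID:
    """Identifies reference data anchored at a consistent physical
    location, addressable in the same way as a URL."""
    cell: int                  # index of the SCVT cell containing the anchor
    anchors: tuple             # e.g. ("geometry", "trackable")
    persistence: Persistence
    payload_kinds: tuple = ()  # e.g. ("image", "point cloud", "mesh")

    def to_uri(self) -> str:
        return f"space://cell/{self.cell}"  # hypothetical scheme

@dataclass(frozen=True)
class TimeID:
    """A timestamp anchor in a consistent physical time."""
    timestamp: float
    persistence: Persistence

    def to_uri(self) -> str:
        return f"time://{self.timestamp}"  # hypothetical scheme
```

    Such records could then be looked up, delegated, and rendered through a lens in differentiated and customized ways.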

    See also the

  • Virtual Reality Markup Language (VRML),
  • Augmented Reality Markup Language (ARML),
  • Ontologic Web Further steps of the 6th of July 2017,
  • OntoLix and OntoLinux Further steps of the 23rd of September 2017, and
  • Ontonics Further steps of the 6th of October 2017.

    Furthermore, the management structure and the IDentity and Access Management System (IDAMS) of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV) with its rings and ID spaces are also integrated with the Time IDs and Space IDs on the basis of a hypergraph.
    As can be seen, everything fits together seamlessly, perfectly, and efficiently, even the

  • temporal indication and spatial position of a MedO and a MedCon (see the related example given in the referenced introduction of the VOS) and
  • safety and security features.

    Ontoscope Further steps
    We reviewed one of our device series and worked on the designs of new models.


    17.August.2018
    Clarification
    In relation with our original and unique works of art titled Ontologic System (OS) and Ontoscope (Os), and created by C.S., we noted that especially in the field of Mediated Reality (MedR) and its subfields of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) some confusion about the true origin exists with our

  • integration of techniques, like for example
    • 2D and 3D scanning,
    • Computer Vision (CV)
      • image recognition and
      • image tracking
    • Cognitive Vision (CV or CogV)
      • object recognition and
      • object tracking,
    • markerless instant tracking,
    • Simultaneous Localization And Mapping (SLAM) system,
    • robotic mapping and navigation
      • SLAM,
    • workflow management,
    • content editing, and
    • 2D and 3D rendering,

    as well as

  • integration of systems, like for example

    with our Ontologic System Architecture (OSA) and

  • realization of said integrations of techniques and systems with our

    that are also copied and used by

  • manufacturers of Software Development Kits (SDKs) and other related software,
  • manufacturers of Ontoscopes in the smartcamera, smartglasses, smartphone, and other variants, and other related hardware, and
  • providers of location-based services, MedR cloud computing platforms, and MedR experiences, and
  • developers of the Augmented Reality Markup Language (ARML) 2.0 developed since the year 2011 and also introduced in the year 2012 and similar formats

    like for example the company Wikitude at first, followed by the companies Google, Apple, Microsoft, Mapbox, Niantic, Facebook, Lenovo and Google, and so on, even as failed attempts to circumvent the various legal protections of our iconic ontologic works in most cases.
    We come back to this issue when we continue with the discussion of our Ontoscope in related clarifications, SOPR issues, and other publications (see Chapter 5 of The Proposal, the section Robotics of the webpage Links to Hardware of the website of OntoLinux, the Announcement Ontoscope 2.0 of the 10th of August 2008, the Ontoscope Further steps of the 11th of July 2009 (the acronym AI was used in the common sense of Artificial Intelligence, which comprises Machine Learning (ML), Computer Vision (CV), etc., and the correct acronym is SB, which stands for SoftBionics), and the Investigations::Multimedia of the 8th of September 2009).

    09:00 and 19:00 UTC+2
    SOPR #135

    *** Work in progress ***
    The topics of this issue are:

  • new members and
  • legal matter.

    New members

  • Wikitude (formerly Mobilizy)

    In 2012, the company restructured its proposition by launching the Wikitude SDK, which is a development framework and includes image recognition and tracking, 3D model rendering, video overlay, geolocation techniques, location-based Augmented Reality (AR), and Simultaneous Localization And Mapping (SLAM) technology, which again enables object recognition and tracking, as well as markerless instant tracking.
    The SDK provides Computer Vision (CV) techniques (e.g. image recognition and tracking) that allow the image tracker to trigger AR functionality within an application.
    For location-based AR, the position of objects on the screen of a mobile device is calculated using the user's position, which is given by the Global Positioning System (GPS) or Wireless Local Area Network (WLAN), and the direction in which the user is facing, which is given by the compass and the accelerometer respectively the inertial sensors and the Inertial Measurement Unit (IMU).
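    The screen-position calculation described above can be sketched as follows (a simplified, flat-projection approximation that ignores pitch, roll, and distance; the field of view and screen width are assumed example values): compute the bearing from the user to the object, subtract the user's heading, and map the angular offset into the camera's horizontal field of view.

```python
# Sketch (simplified approximation): horizontal screen position of an
# object from the user's GPS position and compass heading.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from (lat1, lon1) to (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(user_lat, user_lon, heading, obj_lat, obj_lon,
             fov_deg=60.0, screen_width=1080):
    """Horizontal pixel position of the object, or None if it lies
    outside the camera's field of view. Ignores pitch, roll, distance."""
    offset = (bearing_to(user_lat, user_lon, obj_lat, obj_lon)
              - heading + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return None
    return round((offset / fov_deg + 0.5) * screen_width)
```

    A real implementation would additionally fuse the inertial sensors of the IMU to stabilize the heading and the vertical placement.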
    In 2017, Wikitude launched its variant of the SLAM technology with its SDK 6. Markerless instant tracking, its first feature using SLAM, allows developers to easily map environments and display AR content without the need for target images (markers). Object recognition is the latest addition based on SLAM, with the launch of SDK 7. The idea behind object tracking is very similar to image tracking, but instead of recognizing two-dimensional images and planar surfaces the object tracker can work with three-dimensional structures and objects (tools, toys, machinery, etc.).
    Content can be added by a web interface, by KML, and ARML (ARML Specification 1.0 for Wikitude's World Browser in 2009). In addition, web services are available to register the delivery of dynamic data.
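    As a rough illustration of such ARML content (element names modeled loosely on the published OGC ARML 2.0 examples; abbreviated and not spec-complete), a single geolocated point of interest could be generated like this:

```python
# Sketch (not a spec-complete ARML 2.0 document): build a minimal
# ARML-style description of one geolocated point of interest.
import xml.etree.ElementTree as ET

ARML_NS = "http://www.opengis.net/arml/2.0"  # ARML 2.0 namespace
GML_NS = "http://www.opengis.net/gml/3.2"    # GML namespace for geometry

def poi_arml(feature_id, name, lat, lon):
    """Return an ARML-style XML string anchoring a named Feature at lat/lon."""
    arml = ET.Element("arml", {"xmlns": ARML_NS, "xmlns:gml": GML_NS})
    elements = ET.SubElement(arml, "ARElements")
    feature = ET.SubElement(elements, "Feature", {"id": feature_id})
    ET.SubElement(feature, "name").text = name
    anchors = ET.SubElement(feature, "anchors")
    geometry = ET.SubElement(anchors, "Geometry")
    point = ET.SubElement(geometry, "gml:Point")
    ET.SubElement(point, "gml:pos").text = f"{lat} {lon}"
    return ET.tostring(arml, encoding="unicode")
```

    A browser or SDK consuming such content would resolve the geometry anchor against the user's position, as sketched in the location-based AR calculation above.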

    In 2017, Wikitude joined with Lenovo and hence with Google to develop a cloud computing platform for delivering industrial-focused AR content. The Augmented Human Cloud will combine Wikitude's image recognition and markerless tracking technology with remote video, workflow, and content editing and authoring, and deep learning recognition applications from Lenovo New Vision (LNV), which are based on our Ontologic System and our Ontoscope as well. Combining world-leading AR and AI technologies and leveraging the core competencies and market expertise of us enables us to make a real difference in realizing our vision of the Industry 4.0 as part of our fields of CPS 2.0, IoT 2.0, and NES 2.0 on the basis of our integrating Ontologic System Architecture (OSA).
    The Augmented Human Cloud takes aim at similar services offered by the likes of PTC, Scope AR, and Upskill, all of which enable companies to use AR for remote support functions.

    Or said in other words, its World Browser was legal, the rest was merely copied from our Ontologic System and our Ontoscope, and also depends on our works of art when utilized.

  • Scope AR

    Scope AR revealed a major new update that adds markerless tracking for their respectively our remote assistance application, Remote AR, on standard devices.
    This update, powered by the real-time image tracking solution of Wikitude respectively us, presents a major step forward in the current line of image-based tracking solutions.
    Remote AR allows remote support experts to follow along in real-time with the camera view of a mobile device or smartglasses used by a field technician or customer. The remote expert is able to annotate a field user's view with whiteboard drawings or other instructional content (in a collaborative way).
    With the spatial mapping solutions of the Microsoft HoloLens and the Occipital Structure Sensor, respectively us with our original and unique Ontoscope, anchoring data to a specific place is becoming more and more baseline functionality for these types of technology. That said, most of these types of features require special sensors to accomplish markerless tracking. [...] along with support for Google's Tango sensor, also copied from our iconic ontologic works, adding real-world mapping capabilities that bridge the gap between the Structure Sensor, or the sensor array of the HoloLens Ontoscope variants, and the ability to anchor annotations in the world.

  • Upskill

    Its Skylight AR platform is a productivity suite that displays guided workflows via smartglasses, facilitates remote collaboration, and enables experts to easily author content.

    The company is funded by the lead investors Boeing HorizonX and GE Ventures, who also participated in the latest round with Cisco and Accenture, along with New Enterprise Associates (NEA) and others.

    [see Wikitude, Scope AR, and PTC]

    The integration of the different workflows of, for example, the dialog system LARRY and the 3D modeler Blender, as well as the related process documentation of Total Quality Management (TQM) systems, and also CPS 2.0, IoT 2.0, and NES 2.0, including Industry 4.0, is an original and unique, essential element of our OS.

  • PTC

    See the companies Wikitude, Scope AR, and Upskill.

  • NedSense LOFT

    Large companies with incredibly vast 2D catalogs are unable to transform their libraries into dynamic 3D content that can then be leveraged in AR and VR environments due to high costs of traditional software methods. roOomy by NedSense on the basis of its so-called customer experience engine Loft offers these businesses the ability to transform their catalogs at a scale and efficiency unavailable until today. They can then tap into the most lucrative home furnishings revenue stream - homebuyers - through the utilization of their 3D content in roOomy Virtual Stagings offered for the real estate industry.

    The integration of AR, VR, and MR in systems, applications, and services, as well as their integration with a mobile device, are original and unique elements of our OS, as is our OntoBlender. The ARSDKs ARCore of Google and ARKit of Apple are based in part on our OntoScope software and our Ontoscope hardware.

  • Accenture

    Accenture has struck an alliance with Upskill and is integrating Upskill's Skylight software into its Accenture Digital Distribution Solution, which is a platform that enables enterprises to push any type of content to any device in their fleets.

  • Fidelity Investments

    Fidelity Labs, the research and development division of Fidelity Investments, served as one of Amazon Sumerian's beta customers. The team built a prototype VR assistant, named Cora, who can converse with customers and perhaps one day assist them with stock quotes. Considering the nature of the platform, Fidelity Labs could conceivably clear out the VR backgrounds to make this an AR experience in the future.

    The integration of AR, VR, and MR in systems, applications, and services, as well as their integration with Cognitive Agent Systems (CASs), are original and unique elements of our OS, as is our OntoBlender, which Amazon copied for Sumerian.

  • Estée Lauder

    Makeup giant Estée Lauder's latest project incorporates augmented reality to help customers test out their lipsticks without even getting out of bed. The company collaborated with Modiface - a group that creates AR services for the beauty industry - to produce a new Facebook Messenger chatbot that uses customers' cameras to let them virtually try on the brand's Pure Color Lipsticks.
    [...]
    Once you begin the quiz, the chatbot will ask you what finish you like on your lipstick, what occasion you're wearing it for, and what group of shades you're looking to try out.
    The bot then suggests [or recommends] what colors you should try on. You can then select the color you like the best and then see how it looks on yourself using your device's camera (see also Sephora in the issue #28 of the 10th of October 2017).

  • L'Oréal

    Already one of the leaders in augmented reality for cosmetics, L'Oréal is extending the reach of its ModiFace virtual try-on platform through a partnership with Facebook.
    The cosmetics company will be integrating the augmented reality experiences built with Modiface into Facebook's AR camera (see also Snap(chat)), enabling customers to try on virtual shades of makeup.
    Eventually, L'Oréal acquired the computer vision AR company Modiface in the year 2018.

  • Lowe's Companies

    After previously releasing apps for in-store guidance based on the ARSDKs of Apple and Google, Lowe's has updated its main shopping app with an AR product preview feature. Customers can now view a selection of furniture and appliances in their home or cooking grills on their back porch.

  • Jet.com

    The shopping app of Jet.com gives customers the opportunity to preview a selection of electronics, including TVs, laptops, speakers, monitors, coffee machines, and VR headsets, with multiple products viewable at the same time. The app also lets shoppers place virtual drones on their desks and then fly them around their space. Users can then click the virtual items to add them to their shopping cart.

  • Sotheby's International Realty

    Sotheby's International Realty introduced the so-called Curate AR real estate marketing app, which operates similarly to furniture visualization apps like IKEA Place and Houzz, which were introduced as new members in the issue #72 of the 5th of December 2017. The app allows users to visualize full interior designs, such as living rooms, dining rooms, offices, and bedroom suites, and to quickly swap out designs and capture screenshots of the results.

    Legal matter
    Once again, we remind every affected entity that it has to inform its partners and customers about the legal situation, and about the Articles of Association (AoA) and the Terms of Service (ToS) with the License Model (LM).

    09:00 and 19:00 UTC+2
    Oh, what ...?

    The following information rounds out the list of new members of our Society for Ontological Performance and Reproduction (SOPR) (see the issue SOPR #135 of today):

  • Facebook has made numerous efforts to monetize its Augmented Reality (AR) platform through marketing partnerships, including bringing AR ads to its News Feed, extending branded AR experiences to Facebook Messenger, and adding advertising-friendly capabilities such as image recognition. As we said in relation with other notes, the dependency of this company on our ontologic works has deepened significantly as well.
  • Amazon demonstrated how its AR View tool in the Amazon app works to allow users to place virtual items in their homes to try products before they buy them.
  • eBay updated the AR features of its app to improve the selling experience as well, for example by placing boxes around items to find the right size and by calculating the shipping costs.
  • NASA added an AR app, which overlays rovers on the real view.

    Ontonics Further steps
    We developed two new variants of one of our technologies.

    We also thought about the possibilities that the adaptation of a special production process makes possible.

    Ontoscope Further steps
    We reviewed one of our device series and worked on the design of a new model.

    We reviewed another one of our device series and updated the design of a model.

    intelliTablet Further steps
    We reviewed one of our device series and updated the design of a model.


    18.August.2018
    Ontonics Further steps
    We developed a different variant of one of our technologies and a related device.

    More by happenstance, we also developed new components, and new modules, systems, and devices based on them with outstanding performances.


    19.August.2018
    Ontonics Website update
    In the description of the project CarCloud/Car in the Cloud Computing we substituted the phrase "base for services" with the phrase "basis and blueprint for the more general services" to make the understanding of the statement and subject matter easier.

    Ontonics Further steps
    We developed a first new component and a first new module.

    We developed a second new component, a second new module, and a third new module.

    We developed a third new component and a fourth new module in two variants.

    We developed a fifth new module in two variants. Its first generation was already beyond our imagination, but this second generation is totally beyond what we ever expected to do.


    20.August.2018
    Ontonics Further steps
    We improved older components and modules of a technology and developed new variants of them.
    This technology is an alternative for one of our other technologies and accordingly decisions have to be made for their applications.

    Furthermore, we adapted a component for the development of a new module which has a 100% higher efficiency than comparable modules.
    In a further step, we also integrated this component with the new components mentioned in the Further steps of the 19th of August 2018 resulting in even more advanced solutions.

    Ontoscope Further steps
    We reviewed one of our device series and updated the design of a model.


    21.August.2018
    Ontonics Further steps
    We improved an older foundational component and a related module of a technology.
    This technology is an alternative for one of our other technologies and accordingly decisions have to be made for their applications.

    19:36 and 22:52 UTC+2
    SOPR #136

    *** Work in progress - reality features, anchors, reference frames not clear enough ***
    We worked on the following topics:

  • Time ID and Space ID, and SOPR ledger,
  • common reality features and anchors,
  • legal matter, and
  • benefit program.

    Time ID and Space ID, and SOPR ledger
    The Time ID and the Space ID are

  • based on ontologies given in the format of the Resource Description Framework (RDF) and/or the Web Ontology Language (OWL), and suitable eXtensible Markup Language (XML) grammars and Domain Specific Languages (DSLs) (no fiddling around with GML XML schemas),
  • similar to an anchor, which
    • describes the
      • location of the physical object in the real world represented by a feature and
      • spatial relation between the physical and the virtual object,

      and

    • is defined
      • in an ontology for Augmented Reality,
      • by the Augmented Reality Markup Language (ARML) with a related eXtensible Markup Language (XML) grammar,
      • by AR cloud anchors,
      • and so on,
  • compatible with our Virtual Ontology-Oriented System (VOOS) (see the OntoLix and OntoLinux Further steps of the 16th of August 2018), and
  • stored in a specified
    • distributed ledger of our Society for Ontological Performance and Reproduction (SOPR) (abbreviated as SOPR ledger) based on our universal consensus respectively network of telescopes (see issues #129 of the 23rd of July 2018 and #131 of the 1st of August 2018),
    • interval, for example every month in the beginning, then every day, every hour, and so on as required, and
    • scope, granularity, detailedness, or resolution of the features (see issue #134 of the 16th of August 2018 and once again the OntoLix and OntoLinux Further steps of the 16th of August 2018 (same day)).

    The time stamp of the SOPR ledger can be added to a Time ID and a Space ID, which can have one or more Time IDs.

    The ARML can be utilized, which is based on an object model and allows the direct integration with the Virtual Object System (VOS) (see once again the OntoLix and OntoLinux Further steps of the 16th of August 2018), but also every other format that provides reasonable means of transformation or casting.
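    To illustrate how a Space ID with one or more Time IDs could be expressed in an RDF serialization, here is a minimal sketch that emits N-Triples by hand; the example.org namespace and all property names are hypothetical and not part of any specification or format mentioned above:

```python
def space_id_triples(space_id, lat, lon, time_ids):
    """Serialize a hypothetical Space ID and its Time IDs as RDF N-Triples.

    The vocabulary (the ex: namespace and all property names) is
    illustrative only; a real deployment would use a defined ontology.
    """
    ex = "http://example.org/sopr#"
    s = f"<{ex}{space_id}>"
    triples = [
        f'{s} <{ex}latitude> "{lat}" .',
        f'{s} <{ex}longitude> "{lon}" .',
    ]
    # A Space ID can carry one or more Time IDs, as described above.
    for t in time_ids:
        triples.append(f"{s} <{ex}hasTimeID> <{ex}{t}> .")
    return "\n".join(triples)
```

The output of, say, `space_id_triples("space-42", 48.2, 16.37, ["time-1", "time-2"])` can be loaded by any RDF toolchain that accepts N-Triples and later mapped to OWL or a suitable XML grammar.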

    Common reality features and anchors
    There exist pro and contra arguments that the reality features and anchors used as reference frames and captured with devices

  • based on our Ontoscope,
  • operated in our Ontologic System, and
  • made likewise suitable for this task

    should be

  • processed by individual end users in a first step and then passed on to the SOPR and other members of the SOPR (e.g. providers) in a second step, which
    • would provide more anonymity and be in accordance with data privacy regulations and
    • could be correlated with the operation and utilization of the OntoScope (Reality) Lenses of Ontologic System Components (OSC), and Ontologic Applications and Ontologic Services (OAOS),
  • processed by the SOPR in a first step, specifically on the basis of (Managed) Peer-to-Peer ((M)P2P) computing systems and the Ontologic Net (ON) infrastructure servers, and then passed on to the members of the SOPR in a second step, specifically system, application, and service providers, which
    • would provide more anonymity as well and also support the aggregation of data for common use (e.g. infrastructure) and
    • could be correlated with the operation and utilization of the OntoScope (Reality) Lenses of OSC,

    and

  • processed by system, application, and service providers in a first step and then passed on to the SOPR and other members of the SOPR (e.g. end users) in a second step, which
    • would support the monetizing of big data and support the operation of ON infrastructure and
    • could be correlated with the operation and utilization of the OntoScope (Reality) Lenses of OAOS.

    In this relation, we are also trying to determine the

  • scope of the reality features and anchors or reality reference frames, that
    • on the one hand have to be provided by members of the SOPR to the servers of the Ontologic Net (ON) in accordance with the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR and
    • on the other hand could be variable in their granularity, detailedness, and resolution,

    and

  • quantity and quality of the data, which must be sufficient, so that eventually
    • every OntoScope (Reality) Lens of a system, application, and service works with the overall data common for all SOPR members and
    • every user of an Ontoscope, a device operated with our Ontologic System, and any other suitable device has access to the related part of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV).

    Legal matter
    If for example a company demands the ban of its Ontoscope implementation, then we would do that ASAP in accordance with the AoA and the ToS of our SOPR.

    The same holds for AR cloud computing platforms if their providers refuse to comply with the AoA, the ToS, and the LM of our SOPR and pay the royalties.

    Members of the SOPR have to respect others' and our Intellectual Properties (IPs).

    Benefit program
    We are thinking about providing one or more of our superior Augmented Reality (AR) technologies to our members. Potentially, we would couple this with our AR, Mixed Reality (MR), or Mediated Reality (MedR) cloud computing platform, or at least provide it to the SOPR first and deliver it to beneficiaries second.


    22.August.2018
    Clarification

    *** Proof-reading mode ***

    We added the terms Cognitive Vision (CV or CogV), Cognitive robot (Cbot or Cogbot), and immobile robot (immobot) to various messages and notes. In fact, we have not added new matter from the technical and legal points of view in this way, because

  • our Ontologic System (OS) includes SoftBionics (SB), Cognitive Agent Systems (CASs), and immobile robotics (immobotics),
  • SB includes the subfields of
    • Artificial Intelligence (AI),
    • Machine Learning (ML),
    • Computer Vision (CV),
    • Simultaneous Localization And Mapping (SLAM) system,
    • Cognitive Software Agent System (CSAS),
    • Cognitive Vision (CV or CogV),
    • etc.
  • ML is directly connected with CV,
  • CV is directly connected with
    • visual processing utilizing syntactic computing, e.g. ML,
    • image recognition, and
    • SLAM,
  • CogV is directly connected with
    • visual interpretation utilizing semantic computing, e.g. AI,
    • context-based vision, and
    • object recognition,
  • robotic mapping and navigation are directly connected with SLAM,
  • CSAS is classified as CAS,
  • immobot is classified as Cbot or Cogbot by scientists,
  • Cbot is classified as CAS,
  • CAS with CV implies CogV, and eventually
  • Ontoscope is also classified as CAS and Cbot.

    Also keep in mind that

  • the term immobile in the designation immobile robot refers to the lack of parts utilized for locomotion and actuation, but not to the lack of the possibility to be a mobile device, and
  • we made at least every mobile device that is operated by a part of our OS an immobot respectively an Ontologic roBot (OntoBot) (see also the Investigations::Multimedia, AI and KM of the 23rd of April 2017).

    Website update
    Yesterday and today, we updated the note SOPR is coming to you as well of the 11th of August 2018, the issues SOPR #132 of the 12th of August 2018, SOPR #133 of the 15th of August 2018, and SOPR #134 of the 16th of August 2018, and the Clarification of the 17th of August 2018 by adding the terms Cognitive Vision (CV or CogV), Cognitive robot (Cbot or Cogbot), and immobile robot (immobot) in correspondence to the Clarification of today, and therefore without adding new technical and legal matter, to make their understanding easier.

    Ontonics Further steps
    What a next development and first success: Our OntoLab has achieved the next milestone with the first simulations of three foundational components of one of our superior technologies.
    Even better, the successful implementation of a related prototype for mass production is now preprogrammed.

    Even better still, this is only the beginning and we have already sketched the next generation of one of these technologies, which is even more ... hmmm ... crazy, amazing, fascinating, or so.

    We also developed a new component, which drives one of these technologies to a new maximum. This is crazy stuff.


    23.August.2018
    Comment of the Day
    "Press freedom does not mean jester's license.", [A commentator, 19th of August 2018]
    "Pressefreiheit bedeutet nicht Narrenfreiheit.", [Ein Kommentator, 19. August 2018]

    Ontonics Further steps
    We noticed that we might have over-engineered the foundational components mentioned in the Further steps of the 21st and 22nd of August 2018. Indeed, one feature might not be needed at all for their foundational functionality, but it can still be used for advanced functionalities.
    Now, we can start the next phase with the implementation of a related prototype for mass production.

    Potentially, we realize and present some of our new technologies as works of art. :)

    Investigations::Multimedia

  • University of Rochester: In the year 2016, some ingenious (not really) scientists of the University of Rochester have presented another cloaking system, which is designated as a digital cloak. We quote a related report: "Now you see me... Researchers bring us one step closer to the magic of Harry Potter's invisibility cloak with incredible scientific breakthrough
    Researchers from the University of Rochester in New York have released a video showing how you can now move an object which is cloaked by a device they have created and it still remains hidden to the human eye.
    The invention follows in the footsteps of the Rochester Cloak, unveiled in 2014, which uses four lenses in a line at specific distances from each other to make objects appear invisible.
    The scientists have now been able to use flat screen displays to extend the range of angles that can be hidden from view.
    Their method lays out how cloaks of arbitrary shapes, that work from multiple viewpoints, may be practically realized in the near future using commercially available digital devices.
    The clip shows PhD student [...] using a camera, a [tablet computer,] and a special lenticular lens. He films the background before processing it so it can be displayed on the [tablet computer] through the lens.
    Usually the viewer at this stage could spot the difference between the background and a video of it played on a screen in front by changing their point of view. But the researchers explain in the video: 'This system calculates the direction and position of the light rays so they can be properly displayed as if they were unobstructed. [...] As the viewpoint shifts, the image on the display changes accordingly, keeping it aligned with the background.' As a result, the area behind the display is effectively cloaked.
    One problem [of] the device so far is the poor resolution of the image, which is significantly lower than the resolution of the purely optical device. Furthermore, if the image behind the screen alters, the effect is lost, as the background would need to be filmed and processed again, which would take several minutes. However, [the PhD student and] his adviser Professor [...] are hoping to soon be able to produce the same effect in real time.
    The Rochester Digital Cloak is patent pending."

    A lenticular lens is also used with handheld gaming consoles and 3D palm computers.
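    The alignment described in the quote, calculating the direction and position of the light rays so that the display stays consistent with the unobstructed background as the viewpoint shifts, reduces in the simplest 2D case to similar triangles; the following is a minimal geometric sketch with made-up plane distances, not the researchers' actual implementation:

```python
def background_point(viewer_x, pixel_x, display_z, background_z):
    """Trace a light ray from the viewer through a display pixel to the background.

    The viewer sits at (viewer_x, 0); the display is the plane z = display_z
    and the background the plane z = background_z (background_z > display_z > 0).
    Returns the x coordinate of the background point the display pixel must
    reproduce so the display appears transparent (similar triangles).
    """
    t = background_z / display_z
    return viewer_x + (pixel_x - viewer_x) * t
```

Note that the result depends on `viewer_x`: as the viewpoint shifts, each pixel must show a different background point, which is exactly the viewpoint-dependent update the quote describes.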

    But we have found two little problems with this "incredible scientific breakthrough":
    One problem is the existence of the multimedia works of art titled CloakWear and CloakWatch, and created by C.S., which

  • even work in real time already by using our MobileKinetic technology configured with a 3D camera,
  • even utilize the Mediated Reality (MedR) paradigm, including the Augmented Reality (AR) paradigm, and
  • even were created and listed in the Innovation-Pipeline of Ontonics before the presentation of the Rochester Cloak (see the Ontonics Website update of the 10th of August 2013).

    The other problem is that deliberately filing a patent for a known item is an infringement of the patent law and if such a patent has been issued then it has to be removed from the patent roll again.


    25.August.2018

    06:01 and 26:10 UTC+2
    SOPR #137

    *** Beautifying mode ***
    We discuss the following topics, which we have been thinking about in the last weeks:

  • common reality anchors and frames,
  • voice-based systems and virtual assistants, and
  • legal matter.

    Common reality anchors and frames
    We are still working on the issues #133 of the 15th of August 2018, #135 of the 17th of August 2018, and #136 of the 21st of August 2018.

    We explained in the message OS is On of the 9th of May 2016 that even a spoken sentence can be used as a means for interaction and operation, because said spoken sentence has the same status and function as a Uniform Resource Locator (URL) (see also the OntoLix and OntoLinux Further steps of the 16th of August 2018).
    Voice-based systems and virtual assistants have not been unified and united by the SOPR so far, but this will be introduced as well. For the implementation of this system functionality, we need something like Speech IDs, Gesture IDs, Skill IDs, and so on as some kind of common reality anchors used as reference frames, as is already done with the anchors of Mediated Reality (MedR) cloud computing platforms and the Time IDs and Space IDs of our Ontologic Net (ON), Ontologic Web (OW), and Ontologic uniVerse (OV), as part of the common reality frames.
    As in the cases of other fields, the MedR anchors used as reference frames of related Ontologic Applications and Ontologic Services (OAOS) overlap with the infrastructure of our SOPR, or ON, OW, and OV. The regulation in respect to the infrastructure is crystal clear.
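    One conceivable way to treat a spoken sentence like a URL, as described above, is to normalize the transcript and derive a stable identifier from it. This is purely a hypothetical sketch: the speech:// scheme, the normalization rules, and the digest length are our illustrative assumptions, not a defined part of the Speech IDs mentioned above:

```python
import hashlib
import re

def speech_id(utterance):
    """Derive a stable, URL-like identifier from a spoken sentence.

    The transcript is normalized (lower-cased, punctuation stripped,
    whitespace collapsed) so that trivially different renderings of the
    same sentence map to the same ID. Scheme and format are illustrative.
    """
    normalized = re.sub(r"[^a-z0-9 ]", "", utterance.lower())
    normalized = " ".join(normalized.split())
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
    return f"speech://{digest}"
```

With such a scheme, equivalent phrasings such as "Open the door!" and "open  the door" resolve to the same anchor, which is the property a common reference frame would need.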

    Voice-based systems and virtual assistants
    In all of these fields, like for example voice-based systems, virtual assistants, digital currencies, distributed ledgers, and cloud computing platforms, two main questions are how to

  • separate the
    • private data and other digital properties of the users,
    • virtual or digital estate of the manufacturers of Ontologic Systems (OSs) and the providers of Ontologic Applications and Ontologic Services (OAOS), and
    • common virtual or digital estate of all SOPR members respectively common items of the ON, OW, and OV,

    and

  • keep them separated

    (see also the issue #136 of the 21st of August 2018 once again).

    OSC and OAOS? Computer says Yes.
    ON, OW, and OV infrastructure? Computer says Maybe. :D
    SOPR infrastructure? Computer says No.

    Legal matter
    Because we are observing that our liberal regulations are exploited in various ways, we would like to recall that the reasons and considerations for introducing the SOPR at all

  • were not an
    • economic pressure,
    • infringement of our rights, like for example the massive
      • copyright infringements and
      • unfair business practices in general, like blackmailing and abusing the market power in other ways by large companies,
    • or both,

    which are manageable due to our many years of preparation, but

  • were the
    • personal interest of C.S. in
      • letting the public interact with and even use our works of art titled Ontologic System and Ontoscope, and
      • continuing the discussions in this way,
    • social interest of other elements of the societies, and
    • facts that
      • an expropriation of the works of art created by C.S. is not realizable and manageable in practice due to the huge compensation that every constitutional state would have to pay, and
      • the only reasonable solution and compromise is the licensing of our works of art by following the Reasonable And Non-Discriminatory (RAND) terms, also known as Fair, Reasonable, And Non-Discriminatory (FRAND) terms.

    For a better understanding of the reasons why an expropriation is not realizable and manageable we also repeat/recall some of the related notes:

  • "[An] expropriation without any compensating measure is a frontal attack on the social contract, especially the public peace or peace under law, the legal order, and the basic principle of unity, justice, and freedom, as well as the democracy eventually. The time of [Hans] Kohlhase was 500 years ago.", [comment on the 9th of September 2017].
  • "On the one side we have the social interest [in the next generation of the Internet and World Wide Web in particular as well as new technological developments in general], which is sufficiently weighty for an expropriation respectively a compulsory purchase of our Intellectual Properties (IPs).
    On the other side such an expropriation requires formal actions by the governments, for example the introduction of a special law, in addition to a reasonable compensation payment for the actual value and the loss of future revenues according to the common valuation or street price.
    An alternative is that everybody is allowed to participate for a reasonable compensation, such as a fee or a share.", [issue #25 of the 6th of October 2017],
  • A "potential expropriation demands a special act in every democracy and also a compensation in accordance with the street value (of the [Ontologic System (OS) with its Ontologic Net (ON), Ontologic Web, and Ontologic uniVerse (OV), including the] Fourth Industrial Revolution[, and the Ontoscope (Os)]), which cannot be estimated and would exceed tens of trillions of U.S. Dollar if done legally.", [comment Oh, what ...? on the 23rd of March 2018].
  • "SOPR sovereignty vs. cyber sovereignty,
    • there will be no cold expropriation by any government,
    • SOPR will keep its domiciliary right and hence remain some kind of a gatekeeper managing [the Ontologic System]", [issue #114 Preview of the 26th of March 2018].


    26.August.2018
    Clarification
    Most probably, we have already explained some details of the matter discussed here some years ago, but due to actual developments we would like to continue with our related explanations and clarifications. Augmented Reality (AR) cloud computing systems and location-based services utilizing for example anchor-based shared reference frames are included in the multimedia work of art described in the OntoLinux Further steps of the 26th of November 2010, which

  • integrates the fields of
    • crowdsourcing,
    • image and video sharing,
    • web or online mapping,
    • photogrammetry,
    • Image-Based Modeling and Rendering (IBMR) respectively Structure from Motion (SfM) systems (set of images, image features, and image matches as input, and production of a 3D reconstruction of camera and (sparse) scene geometry as output and providing related functionalities by utilizing control-adaptive reprojection techniques and other techniques), which is based on for example the technique of Bundle Adjustment (BA) (feature-based multiple view reconstruction vision), and
    • Distributed System (DS), specifically
      • Peer-to-Peer (P2P) computing and
      • cloud computing,

      on top of the OntoBot and OntoScope software components, which again integrate the fields of

      • Computer Vision (CV),
      • Cognitive Vision (CogV),
      • Image-Based Modeling and Rendering (IBMR),
      • Mediated Reality (MedR),
      • etc.,
  • was also described with the

    and

  • is eventually included in our Ontologic System (OS) like all the other components, systems, and services listed above (see the webpage Components and the sections Mixed Reality and Earth Simulation/Virtual Globe of the webpage Links to Software of the website of OntoLinux).

    All works were created by C.S..
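    Since Structure from Motion (SfM) and Bundle Adjustment (BA) recur throughout this clarification, a minimal sketch of the quantity BA minimizes may help: the reprojection error between observed image points and the projections of the reconstructed 3D points. The pinhole model here is deliberately simplified (no rotation, no distortion) and all numbers are illustrative:

```python
def project(point3d, camera_pos, focal):
    """Pinhole projection of a 3D point for an axis-aligned camera at camera_pos.

    A deliberately simplified model (no rotation, no lens distortion),
    just enough to show what bundle adjustment minimizes.
    """
    x, y, z = (p - c for p, c in zip(point3d, camera_pos))
    return (focal * x / z, focal * y / z)

def reprojection_error(points3d, cameras, observations, focal):
    """Sum of squared distances between observed and reprojected image points.

    observations maps (camera_index, point_index) -> observed (u, v).
    Bundle adjustment jointly refines cameras and points3d to minimize
    this sum over all views of all points.
    """
    total = 0.0
    for (ci, pi), (u_obs, v_obs) in observations.items():
        u, v = project(points3d[pi], cameras[ci], focal)
        total += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return total
```

A perfect reconstruction yields an error of zero; any drift in a camera or point position raises the sum, which is what the feature-based multiple view reconstruction mentioned above drives down.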

    In fact, the system titled "Photo Tourism"

  • is the predecessor system of the SfM system Bundler,
  • is classified as "Information Interfaces and Presentation: Multimedia Information Systems - Artificial, augmented, and virtual realities and Artificial Intelligence: Vision and Scene Understanding - Modeling and recovery of physical attributes", and related to the "Keywords: image-based rendering, image-based modeling, photo browsing, structure from motion",
  • references
    • Image-Based Modeling (IBM) systems,
    • Image-Based Rendering (IBR) systems, and
    • systems based on geo-location information for
      • image browsing, such as the documents titled
        • "Geographic location tags on digital images",
        • "A systems architecture for ubiquitous video",
      • image retrieval, and
      • semi-automatic and automatic image annotation, such as the document titled
        • "A touring machine: Prototyping 3d mobile augmented reality systems for exploring the urban environment" based on CV,
      • location retrieval, such as the document titled
        • "A system for automatic pose-estimation from a single image in a city scene",

        as well as

      • navigation,

    and

  • is capable of "handl[ing] partial and full occlusions" "taking into account [...] the motions of these objects under changes in viewpoint" for example.

    But Bundler was presented in the year 2008 and in the related works the following items are completely missing:

  • utilization of data provided by a 3D camera or a 2D or 3D scanner,
  • processing of data in real-time, specifically on the basis of a Simultaneous Localization And Mapping (SLAM) system,
  • integration of a game engine,
  • shared or multi-user experience,
  • operation of a Mediated Reality Environment (MedRE), specifically an Augmented Reality Environment (ARE) and a Virtual Reality Environment (VRE), [ARE and VRE added on the 3rd of September 2018]
  • operation of a handheld device in real-time,
  • provision of a cloud computing service respectively something as a Service (aaS), specifically location-based services, which refers to web mapping of goods and services, and
  • Cyber-Physical Systems (CPS), Internet of Things (IoT), and Networked Embedded Systems (NES), as well as
  • integration of the
    • works referenced in the Photo Tourism system,
    • elements of our special project listed above at the beginning, and/or
    • other functionalities discussed elsewhere (see also the last section of the OntoLix and OntoLinux Website update of the 11th of April 2014, the issue SOPR #134 of the 16th of August 2018, and the Clarification of the 17th and also 22nd of August 2018)

    in contrast to us with our integrating Ontologic System Architecture (OSA), which connects and integrates everything with each other and even adds more features and functionalities.
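To make one of the missing items above concrete: a Simultaneous Localization And Mapping (SLAM) system interleaves a motion prediction with a measurement update in real time. The following is a deliberately minimal one-dimensional sketch of that predict/update cycle with hypothetical noise values and a hypothetical landmark; it illustrates the general technique only, not an implementation of any cited system:

```python
# One-dimensional predict/update cycle as run by real-time
# localization systems: fuse noisy odometry (prediction) with a noisy
# range measurement to a landmark at a known position (update).
landmark = 10.0          # known landmark position (hypothetical)
x, var = 0.0, 1.0        # initial pose estimate and its variance
Q, R = 0.2, 0.5          # motion and measurement noise variances

def predict(x, var, u):
    """Motion update: move by odometry u; uncertainty grows by Q."""
    return x + u, var + Q

def update(x, var, z):
    """Measurement update: z is the measured range to the landmark."""
    predicted_z = landmark - x
    K = var / (var + R)               # Kalman gain
    x = x + K * (z - predicted_z)     # innovation corrects the estimate
    var = (1.0 - K) * var             # uncertainty shrinks
    return x, var

x, var = predict(x, var, u=1.0)       # odometry says we moved 1 unit
x, var = update(x, var, z=8.8)        # landmark measured at range 8.8
print(round(x, 4), round(var, 4))
```

A full SLAM system additionally estimates the landmark positions themselves and runs this loop on every sensor frame, which is what makes the real-time aspect non-trivial.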

    Furthermore, some projects of the BigSFM project conducted after the 26th of November 2010 seem to be based on our works as well, such as the projects

  • "MatchMiner: Efficiently Finding Connected Components in Large Image Collections" based on a graph and an "information-theoretic algorithm" respectively Algorithmic Information Theory (AIT) and hence on our Ontologic System with OntoBot and OntoScope,
  • "World-Scale Pose Estimation using 3D Point Clouds" based on OntoGlobe,
  • "Network Principles for SfM: Disambiguating Repeated Structures with Local Context" based on a graph analysis and hence on OntoBot with OntoScope, and
  • "Robust Global Translations with 1DSfM" based on data captured with a 3D camera.
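For illustration, the connected-components idea underlying a project like MatchMiner can be sketched as follows: photos are nodes, verified feature matches are edges, and each connected component groups photos that can be linked into one reconstruction. The graph, image names, and matches below are hypothetical:

```python
from collections import defaultdict

def connected_components(edges):
    """Union-find over an edge list; returns each component as a
    sorted list of node names."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)

    groups = defaultdict(list)
    for node in parent:
        groups[find(node)].append(node)
    return sorted(sorted(g) for g in groups.values())

# Toy match graph: two independent photo clusters.
matches = [("img1", "img2"), ("img2", "img3"), ("img4", "img5")]
print(connected_components(matches))
# → [['img1', 'img2', 'img3'], ['img4', 'img5']]
```

At web scale the interesting part is deciding which candidate edges to verify at all, which is where an information-theoretic prioritization comes in; the component extraction itself remains this simple.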

    Ontonics Further steps
    We continued the work on a component and a module that we improved in a different way.


    28.August.2018

    05:55, 18:25, 21:22, and 24:26 UTC+2
    SOPR #138

    *** Work in progress - 2nd section about confusion still unclear ***
    In this issue we focus on formal matters, especially the following topics:

  • cost of SOPR infrastructure,
  • system, application, and service vs. infrastructure,
  • legal matters, and
  • diverses.

    Cost of SOPR infrastructure
    After a review of former issues, we saw that we had already committed ourselves to pay the costs for the infrastructure of our Society for Ontological Performance and Reproduction (SOPR) but were less clear about this point in later issues. Nevertheless, we have to review the details of the matter as it develops, for example in cases where we have to decide whether an activity is a licensable system, application, and service on the one side or an element of the infrastructure on the other side, and where the SOPR has to proceed with the procurement.

    System, application, and service vs. infrastructure
    In the recent past, we have used phrases like "on our OS respectively in our OS" and "with our OS respectively [...] in our OS". The reason for wordings like these is that

  • phrases like "on our OS" and "with our OS" are related to the software-based Ontologic System Components (OSC) and Ontologic Applications and Ontologic Services (OAOS), and the related part of the integrating Ontologic System Architecture (OSA), which are the parts that can be
    • operated in the (entire) Ontologic System (OS) and also
    • licensed for reproduction and performance in accordance with the Articles of Association (AoA) and the Terms of Service (ToS) of our SOPR,

    while

  • the phrase "in our OS" is related to the entire Ontologic System (OS), which goes even beyond the Ontologic uniVerse (OV) and can be licensed only in parts for reproduction and performance as OSC, OAOS, and OSA.

    These different meanings of the designation Ontologic System and its acronym OS might be a source of confusion, which should be avoided by using the term Ontologic System Components and its acronym OSC instead, specifically

  • in the AoA and the ToS with the License Model (LM) of our SOPR and
  • in relation to the separation of the OSC, the OAOS, and related platforms on the one side and an element of the infrastructure of our SOPR on the other side.

    Legal matters
    When we look at various companies, we are able to classify them into the following three groups:

  • Companies of the first group have their own ideas, create their own concepts, and develop their own systems, applications, and services.
  • Companies of the second group act in the same ways as the companies of the first group, but in addition copy elements of our Intellectual Properties (IPs), mimic us in other ways, or do both.
  • Companies of the third group have no ideas, create nothing, and do not even imitate us, but merely reflect us and market our Intellectual Properties (IPs), works, and activities, and may eventually act in ways similar to, for example, patent trolls.

    Now, we have found that this third group annoys us most, and not only because its behaviour is doubtlessly parasitic: it is also not the pioneering acting and competition of individual entities that the societies want to see and support.
    Not surprisingly, these other cases are also the cases that are related to the issues of

  • system, application, and service vs. infrastructure and
  • monopolism.

    To handle these reflective cases we will extend the AoA of our SOPR with one or more provisions like the following ones:

  • If the revenue or the profit or both of a corporation that is a member of the SOPR exceeds a specific ratio or percentage, then the related rights are activated, as listed in the following:
    • 1/2 or 50.0 ... 0% - silent board observer seat
    • 2/3 or 66.6 ... 6% - board observer seat
    • 3/4 or 75.0 ... 0% - voting board seat
    • 4/5 or 80.0 ... 0% - Golden Key to the whole corporation meaning access to every real and virtual data and areas
    • 5/6 or 83.3 ... 3% - exclusive option for friendly takeover based on actual street value of the corporation.
  • 10% of the royalties due have to be transferred in shares of a corporation.
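Such a threshold scheme can be expressed as a simple lookup. The following sketch assumes the ratio in question is a single number between 0 and 1 (the provision above leaves the exact base of the ratio open), and the right names are abbreviated; it is illustrative only:

```python
# Thresholds from the provision above, checked from highest to lowest.
# A ratio that meets or exceeds a threshold activates that right.
THRESHOLDS = [
    (5 / 6, "exclusive option for friendly takeover"),
    (4 / 5, "Golden Key to the whole corporation"),
    (3 / 4, "voting board seat"),
    (2 / 3, "board observer seat"),
    (1 / 2, "silent board observer seat"),
]

def activated_rights(ratio):
    """Return all rights whose threshold the given ratio meets or exceeds."""
    return [right for limit, right in THRESHOLDS if ratio >= limit]

print(activated_rights(0.70))
# → ['board observer seat', 'silent board observer seat']
```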

    Self-evidently, such provisions are

  • bound to C.S., an heir of C.S.' IPs, and an official representative of C.S.,
  • related to a parent corporation or only a subsidiary,
  • reasonable, and also
  • for the better of, beneficial for, and in harmony with the management and development of a related corporation.

    Yes, we are creative in this field as well. :)

    Diverses
    We have sighted some very nice

  • clubhouses for our SOPR members as well as
  • areas for our hover stations.

    Enough talk - More action


    31.August.2018
    Ontonics Further steps
    We looked in more detail at a specific variant of a component, but we are not sure if it is an improvement.

    We also developed a new variant of a component, a new variant of a module, and a new variant of a device based on it.

    © or ® or both
    Christian Stroetmann GmbH
    Disclaimer