
07. An idea around an information gathering-collaborating-documenting tool - Part 2

Concepts around tackling the knowledge management problem


Please refer to Part 1 of this series: 5. An idea around an information gathering-collaborating-documenting tool (mayoan.com)

 

This Part 2 post elaborates a little more on the concept behind the proposed tool.

The proposed tool sits in Layer 2 of the description below.

 



Layer 1:


Layer 1 consists of data rendered in tools like browsers, in both normal and incognito modes. The standard data formats are HTML, RSS, and the like.

Once the Layer 2 and Layer 3 tool behaviours, supported data formats, etc. are adequately defined, Layer 1 tools can render them in a read-only mode. This will be tremendously useful to any researcher, as the rendering builds upon the concepts of knowledge management as they apply to the Layer 2 and Layer 3 formats (e.g., RDF).

Thus, for example, browsers will render data in three modes:

  • Normal mode (normal and incognito settings)

  • Knowledge Digraph mode (based on data emitted by the standards in Layer 2) – this covers the Layer 3 data format (RDF), enhanced with time, locale, geography, categories and subcategories; in other words, a domain-specific DSL data definition. In this aspect, we may visualize the data structure as inheriting from a common root knowledge object, with the inheritance flowing through categories and subcategories too (see the sketch after this list)

  • RDF Digraph mode (based on the raw RDF format)
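
To make the inheritance idea concrete, here is a minimal Python sketch. The class names (KnowledgeObject, HistoryObject) and field names are assumptions for illustration, not part of any published standard: a common root object carries the DSL attributes, and category-level subclasses inherit and extend it.

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class KnowledgeObject:
    """Hypothetical common root: every node carries the DSL attributes."""
    uid: uuid.UUID = field(default_factory=uuid.uuid4)
    category: str = ""                    # e.g. "History"
    subcategory: str = ""                 # e.g. "History/Ancient"
    time_period: Optional[str] = None     # e.g. "300 BCE - 100 CE"
    locale: Optional[str] = None          # e.g. "ta-IN"
    geography: Optional[str] = None       # e.g. "South Asia"

@dataclass
class HistoryObject(KnowledgeObject):
    """A category-level subclass: inherits the root structure and adds
    domain-specific (DSL) fields of its own."""
    dynasty: Optional[str] = None
    sources: list = field(default_factory=list)

obj = HistoryObject(category="History", subcategory="History/Ancient",
                    geography="South Asia", dynasty="Maurya")
print(obj.uid, obj.category, obj.dynasty)
```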


Normal browser-based search engines and AI search tools serve data/content.



Layer 2:

This is the layer in which the platform/knowledge-management tool mentioned in Part 1 is conceptualized.

In this layer, content creators will use this knowledge management platform and select the appropriate categories and subcategories. The tool may suggest one or more nodes from Layer 3 as a starting point. The UUIDs proposed for the different pieces of content will also be handy in connecting the knowledge base and developing it as a graph network. Here, SMEs for the categories and subcategories may define additional DSLs consisting of information on the subject, time, locale, geography, additional subcategories, etc.
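
A minimal sketch of that creation flow, assuming a hypothetical in-memory lookup (LAYER3_SUGGESTIONS) in place of a real Layer 3 suggestion service; the node shape and URN-style identifiers are illustrative only.

```python
import uuid

# Hypothetical stand-in for the Layer 3 category hierarchy's
# suggestion service; names and identifiers are made up.
LAYER3_SUGGESTIONS = {
    "History/Ancient": ["urn:kn:node:indus-valley", "urn:kn:node:mesopotamia"],
}

def create_content_node(title: str, category: str) -> dict:
    """Create a content node with its own UUID and link it to
    Layer 3 nodes suggested as starting points."""
    return {
        "uid": str(uuid.uuid4()),          # UUID connecting the knowledge base
        "title": title,
        "category": category,
        "links": LAYER3_SUGGESTIONS.get(category, []),  # suggested anchors
    }

node = create_content_node("Trade routes of the Indus valley", "History/Ancient")
print(node["uid"], node["links"])
```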

Again, data gets visualized as nodes (subjects and objects) and edges (predicates). But now, because of the domain (category/subcategory) DSL, it can be superimposed, for example, on a map/geography for a given time period, covering a particular language or ethnicity. Like the ‘Also recommended’ feature on e-commerce sites, additional network nodes can be built from the search context by leveraging the relationships from Layers 2 and 3. This makes knowledge management much easier and helps keep wrong or stale data off the internet; the most relevant data and its relationships become available to researchers in a timely fashion.
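
One simple way to realize the ‘Also recommended’ idea is to suggest nodes that share a neighbour with the node being viewed. The sketch below works over a hypothetical in-memory edge list; the node IDs and predicate names are made up for illustration.

```python
# Illustrative (subject, predicate, object) edges.
edges = [
    ("node:A", "relatesTo", "node:B"),
    ("node:C", "relatesTo", "node:B"),
    ("node:C", "partOf", "node:D"),
]

def also_recommended(node_id: str) -> set:
    """Recommend nodes that share a neighbour with the given node,
    mimicking the 'Also recommended' pattern from e-commerce."""
    neighbours = {o for s, _, o in edges if s == node_id}
    neighbours |= {s for s, _, o in edges if o == node_id}
    recs = set()
    for s, _, o in edges:
        if s in neighbours and o not in (node_id, *neighbours):
            recs.add(o)
        if o in neighbours and s not in (node_id, *neighbours):
            recs.add(s)
    return recs

print(also_recommended("node:A"))  # {'node:C'} via the shared node:B
```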

A graph database (or similar), coupled with search bots, is expected to be used to build the knowledge graphs. An intelligent, integrated search tool can be expected in this Layer 2 tool, to dynamically alter the nodes and relationships per the search criteria, in both RDF and DSL contexts.
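
As a sketch of that dynamic behaviour, the following assumes nodes annotated with the DSL attributes from earlier and narrows the graph to whatever matches the search criteria; in a real Layer 2 tool this would be a query against the graph database rather than an in-memory filter.

```python
# Illustrative nodes carrying DSL attributes, plus an edge list.
nodes = {
    "node:A": {"time_period": "300 BCE", "geography": "South Asia"},
    "node:B": {"time_period": "1200 CE", "geography": "Europe"},
    "node:C": {"time_period": "300 BCE", "geography": "South Asia"},
}
edges = [("node:A", "relatesTo", "node:C"), ("node:A", "relatesTo", "node:B")]

def search_subgraph(**criteria):
    """Keep only the nodes matching every criterion, and the edges
    whose endpoints both survive the filter."""
    keep = {n for n, attrs in nodes.items()
            if all(attrs.get(k) == v for k, v in criteria.items())}
    return keep, [(s, p, o) for s, p, o in edges if s in keep and o in keep]

print(search_subgraph(geography="South Asia"))
```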

 

Layer 3:

This builds upon enhanced versions of the current RDF formats, assuming that the most useful mode of rendering raw RDF data is to show it as a network of nodes (subjects and objects) and relationships (predicates).
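
That rendering maps directly onto a digraph: subjects and objects become nodes, predicates become labelled edges. A minimal sketch using the rdflib and networkx Python libraries, with made-up example triples:

```python
# Requires: pip install rdflib networkx
import networkx as nx
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:IndusValley ex:locatedIn ex:SouthAsia .
ex:IndusValley ex:subcategoryOf ex:AncientHistory .
"""

rdf = Graph()
rdf.parse(data=turtle, format="turtle")

digraph = nx.DiGraph()
for subject, predicate, obj in rdf:            # each triple -> one edge
    digraph.add_edge(str(subject), str(obj), label=str(predicate))

print(digraph.number_of_nodes(), "nodes,", digraph.number_of_edges(), "edges")
```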

We can learn from the example set by ICANN/IANA, where the top-level domain names are controlled centrally. Similarly, the top three levels of RDF data, consisting of just the categories and subcategories of knowledge, should be set by a central body. How can we arrive at a set of categories and subcategories to define? One suggestion is to take a union of the names of all educational courses offered in US schools/universities, or in worldwide educational institutions (in English), and then de-duplicate them. A careful arrangement of this data will yield a hierarchy that is useful to all. This hierarchy should be managed by the central body, and additions or modifications should go through a formal approval process.
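
A toy sketch of that bootstrap step, with made-up course lists and a deliberately naive, hard-coded arrangement rule; a real effort would need curation by SMEs:

```python
# Illustrative course catalogues from two institutions.
university_a = ["Ancient History", "Organic Chemistry", "World History"]
university_b = ["ancient history", "Linear Algebra", "Organic chemistry"]

def normalise(name: str) -> str:
    """Lowercase and collapse whitespace so duplicates match."""
    return " ".join(name.lower().split())

# Union + de-duplication across institutions.
course_names = {normalise(n) for n in university_a + university_b}

# A careful (here: hard-coded keyword) arrangement into a hierarchy.
hierarchy = {
    "History": sorted(n for n in course_names if "history" in n),
    "Science": sorted(n for n in course_names if "chemistry" in n),
    "Mathematics": sorted(n for n in course_names if "algebra" in n),
}
print(hierarchy)
```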

Tools in Layer 3 will help manage data in this hierarchy.

Also, it is expected that the deeper levels of the hierarchy (level 4 and beyond) will be managed by universities worldwide, with an appropriate management/approval process.

A graph database (or similar), coupled with search bots, is expected to be used to build the knowledge graphs. An intelligent, integrated search tool can be expected in this Layer 3 tool, to dynamically alter the nodes and relationships per the search criteria, in the RDF format only.

 

By following the above approach with the recommended standards, the following points are worth noting:

  • A fine-grained approach to knowledge management on a global scale should evolve.

  • Central bodies for managing data in the different base layers should be identified sooner rather than later; these bodies would define not only the standards/formats but also the integration aspects within a given layer and across layers.

  • Since information is growing every moment at an unprecedented scale, it is high time we approached the problem of knowledge management on a sound basis.

  • We have to be very resourceful and efficient with our public data (and with data sharing), especially data around R&D efforts that affect public life in a significant manner.

I would consider using a proprietary technology and protocol to maintain the Layer 3 knowledge network. Thus we arrive at the idea of a knowledge network to capture the knowledge. We can consider implementing it with blockchain too: all the information/data in Layers 2 and 3, in RDF/DSL and RDF formats respectively, will be held in this network of blockchains. I propose calling this the knowledge network, and the hosts knowledge network caching/hosting servers.


The Layer 1 data is just business as usual, with web/app servers serving it in HTML, RSS, and similar formats. From Layers 2 and 3, new information will be pumped into the knowledge network and continually indexed (for example, a new node attaching itself to the knowledge network at several points, or nodes being removed because their data has gone stale). Again, given the hierarchical arrangement, we can also update data by pull (a parent pulling child-node data) or push (a child pushing data into its parent) actions, with appropriate governance mechanisms. Thus, Layer 2 tools will emit static web pages both into websites and into this knowledge network.
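
The pull and push actions move the same data; what differs is which side initiates and where the governance gate sits. A minimal sketch, assuming plain-dict nodes and a stand-in approval rule:

```python
def approved(change: dict) -> bool:
    """Stand-in for the formal approval/governance process."""
    return change.get("reviewed", False)

def pull(parent: dict, child: dict) -> None:
    """Parent-initiated: the parent fetches the child's approved changes."""
    parent.setdefault("data", []).extend(
        c["payload"] for c in child.get("pending", []) if approved(c))

def push(child: dict, parent: dict) -> None:
    """Child-initiated: the child delivers its approved changes upward.
    Same data movement as pull(); only the initiating side differs."""
    pull(parent, child)

child = {"pending": [{"payload": "new finding", "reviewed": True},
                     {"payload": "unreviewed edit"}]}
parent = {}
push(child, parent)
print(parent)   # {'data': ['new finding']}
```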


If this happens, we can expect new-generation browsers that can fly over this knowledge network and display the initial-level information as a graph network with a powerful filter mechanism. This information can be sliced and diced by the various DSL/RDF categories and subcategories. Some popular filters could be time, time period, geolocation, event, category, and subcategory.

