
Property:Has conclusion

From Openresearch

This is a property of type Text.

Pages using the property "Has conclusion"

Showing 11 pages using this property.


R
RDB2ONT: A Tool for Generating OWL Ontologies From Relational Database Systems +In this paper, an algorithm and a tool for generating OWL ontologies from relational database systems are presented. The tool, RDB2ONT, helps domain experts quickly generate and publish OWL ontologies describing the underlying relational database systems while preserving their structural constraints. The generated ontologies are constructed using a set of vocabularies and structures defined in a schema that describes relational database systems on the web, so they guarantee that user applications can work with data instances that conform to a set of known vocabularies and structures. The generated ontologies provide a standardized and meaningful way of describing the underlying relational database systems, thereby bridging the semantic gaps between ontologies describing relational database systems and/or ontologies describing other data sources on the web, such as flat files and semi-structured data. Concepts in OWL ontologies can be defined at multiple levels of granularity, so the generated OWL ontologies can be used to address the semantic heterogeneity problem at multiple levels. Evolution of database systems in large-scale environments is inevitable, so by using the RDB2ONT tool, OWL ontologies can be re-generated with little effort from the domain experts, speeding up the process of integrating data in the underlying relational database systems with other data sources on the web. Although the generated OWL ontologies provide the explicit meaning of concepts and the semantic relationships between related concepts, many open research questions remain. One of them is how to merge the generated OWL ontologies into an integrated OWL ontology so that common views of concepts can be achieved. This would allow users to pose queries on the common views of concepts rather than on the concepts defined in the individual ontologies.  +
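A minimal sketch of the general table-to-ontology mapping idea described above (not the actual RDB2ONT implementation): each table becomes an owl:Class, each column a datatype property, and NOT NULL constraints are preserved as cardinality restrictions. The table definition, the SQL-to-XSD type map and the ex: prefix are illustrative assumptions.

```python
# Hypothetical example: map a relational table description to OWL axioms.
# The SQL-to-XSD mapping and the Employee table are illustrative assumptions.

SQL_TO_XSD = {"INTEGER": "xsd:integer", "VARCHAR": "xsd:string", "DATE": "xsd:date"}

def table_to_owl(table, columns, base="ex:"):
    """Emit one owl:Class per table and one owl:DatatypeProperty per column."""
    lines = [f"{base}{table} a owl:Class ."]
    for col, sql_type, nullable in columns:
        prop = f"{base}{table}_{col}"
        lines.append(f"{prop} a owl:DatatypeProperty ;")
        lines.append(f"    rdfs:domain {base}{table} ;")
        lines.append(f"    rdfs:range {SQL_TO_XSD.get(sql_type, 'xsd:string')} .")
        if not nullable:
            # Preserve a NOT NULL constraint as a minimum-cardinality restriction.
            lines.append(f"{base}{table} rdfs:subClassOf "
                         f"[ a owl:Restriction ; owl:onProperty {prop} ; "
                         f"owl:minCardinality 1 ] .")
    return "\n".join(lines)

print(table_to_owl("Employee",
                   [("id", "INTEGER", False),
                    ("name", "VARCHAR", False),
                    ("hired", "DATE", True)]))
```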
Relational.OWL - A Data and Schema Representation Format Based on OWL +In this paper we have shown how to represent schema and data items originally stored in relational database systems using our own OWL ontology. Relational.OWL enables us to semantically represent the schema of any relational database. This representation can itself be interpreted, due to the properties of OWL Full, as a novel ontology. Based on the latter ontology, we can then semantically represent the data stored in that specific database. The advantage of this representation technique is obvious: both schema and data changes can automatically be transferred to and processed by any remote database system that understands the knowledge representation techniques used within OWL. Misunderstandings are thus ruled out. Besides the refinement and completion of the concrete schema representation, we are considering how to adapt our technique to other types of database systems. Similar solutions can easily be found for Object-Oriented Databases, Hierarchical Databases like IMS, or their hybrids, the modern and more common X.500 or LDAP Directory Systems.  +
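A minimal sketch in the spirit of Relational.OWL (the real ontology defines its own vocabulary; the dbs: terms below are illustrative stand-ins): both the schema of a table and its rows are exposed as RDF triples, so a remote consumer can reconstruct the data.

```python
# Hypothetical example: expose a table's schema and rows as RDF triples.
# The dbs: vocabulary terms are stand-ins, not the actual Relational.OWL names.

def table_as_rdf(table, columns, rows, base="ex:"):
    triples = [(f"{base}{table}", "rdf:type", "dbs:Table")]
    for col in columns:
        triples.append((f"{base}{table}.{col}", "rdf:type", "dbs:Column"))
        triples.append((f"{base}{table}", "dbs:hasColumn", f"{base}{table}.{col}"))
    for i, row in enumerate(rows):
        subject = f"{base}{table}/row{i}"          # each row becomes one resource
        triples.append((subject, "rdf:type", f"{base}{table}"))
        for col, value in zip(columns, row):
            triples.append((subject, f"{base}{table}.{col}", f'"{value}"'))
    return triples

for s, p, o in table_as_rdf("Employee", ["id", "name"], [(1, "Ada"), (2, "Bob")]):
    print(f"{s} {p} {o} .")
```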
S
SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking +RDF instance matching, in the context of interlinking RDF datasets published in the Linked Data Cloud, is the task of determining whether two resources refer to the same real-world entity. This is a challenging task in high demand by data publishers who wish to interlink their datasets in the cloud. In this work, we propose a novel approach, called SERIMI, for solving the RDF instance-matching problem automatically. SERIMI matches instances between a source and a target dataset without prior knowledge of the data, domain or schema of these datasets. It does so by approximating the notion of similarity, pairing instances based on entity labels as well as on structural (ontological) context. As part of the SERIMI approach, we propose the CRDS function to approximate this judgment of similarity. We used two collections proposed by the OAEI 2010 initiative to evaluate SERIMI. On average, SERIMI outperforms two representative systems, RiMOM and ObjectCoref, which tried to solve the same problem using the same collections and reference alignments, in 70% of the cases.  +
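A minimal sketch of label-plus-context instance matching as outlined above; the real CRDS function is more elaborate, and the thresholds, toy datasets and Jaccard context score here are illustrative assumptions.

```python
# Hypothetical example: pair instances by label similarity, then confirm the
# pair using the overlap of their surrounding property values.
from difflib import SequenceMatcher

def label_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def context_sim(ctx_a, ctx_b):
    """Jaccard overlap of the property values surrounding each instance."""
    a, b = set(ctx_a), set(ctx_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match(source, target, label_t=0.8, ctx_t=0.2):
    links = []
    for s_id, (s_label, s_ctx) in source.items():
        for t_id, (t_label, t_ctx) in target.items():
            if label_sim(s_label, t_label) >= label_t and \
               context_sim(s_ctx, t_ctx) >= ctx_t:
                links.append((s_id, t_id))
    return links

source = {"s1": ("Paris", {"country:France", "type:City"})}
target = {"t1": ("Paris", {"country:France", "type:PopulatedPlace"}),
          "t2": ("Paris Hilton", {"type:Person"})}
print(match(source, target))   # expected: [('s1', 't1')]
```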
SLINT: A Schema-Independent Linked Data Interlinking System +In this paper, we present SLINT, an efficient schema-independent linked data interlinking system. We select important predicates based on their coverage and discriminability. Predicate alignments are constructed and filtered to obtain key alignments. We implement an adaptive filtering technique to produce candidates and identities. Compared with the most recent systems, SLINT achieves considerably higher precision and recall in interlinking. SLINT is also very fast, taking around one minute to detect more than 13,000 identity pairs.  +
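A minimal sketch of the two predicate-selection measures mentioned above, coverage and discriminability, computed on a toy triple set; SLINT's exact formulas and thresholds may differ.

```python
# Hypothetical example: coverage = share of instances that use a predicate,
# discriminability = diversity of the predicate's values.
from collections import defaultdict

def predicate_stats(triples):
    subjects = {s for s, _, _ in triples}
    subj_by_pred = defaultdict(set)
    vals_by_pred = defaultdict(list)
    for s, p, v in triples:
        subj_by_pred[p].add(s)
        vals_by_pred[p].append(v)
    stats = {}
    for p in subj_by_pred:
        coverage = len(subj_by_pred[p]) / len(subjects)
        discriminability = len(set(vals_by_pred[p])) / len(vals_by_pred[p])
        stats[p] = (coverage, discriminability)
    return stats

triples = [("a", "name", "Ada"), ("b", "name", "Bob"), ("c", "name", "Ada"),
           ("a", "type", "Person"), ("b", "type", "Person"), ("c", "type", "Person")]
for p, (cov, dis) in predicate_stats(triples).items():
    print(f"{p}: coverage={cov:.2f} discriminability={dis:.2f}")
```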
SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions +SPLENDID allows for transparent query federation over distributed SPARQL endpoints. In order to achieve good query execution performance, data source selection and query optimization are based on basic statistical information obtained from VOID descriptions. The use of open Semantic Web standards, such as VOID and SPARQL endpoints, allows for flexible integration of various distributed and linked RDF data sources. We have described in detail the implementation of the data source selection and the join order optimization. The evaluation shows that our approach achieves good query performance and is competitive with other state-of-the-art federation implementations. In our analysis of the source selection we came to the conclusion that at least predicate and type statistics should be included in VOID descriptions of RDF datasets. The use of third-party sameAs links, however, can significantly increase the number of requests and thus hamper the efficiency of query execution plans. The comparison of the two employed physical join implementations has shown that network overhead plays an important role: both hash join and bind join can significantly reduce the query processing time for certain types of queries. With SPLENDID we would also like to advocate the adoption of VOID statistics for Linked Data. As next steps, we plan to investigate whether VOID descriptions can easily be extended with more detailed statistics in order to allow for more accurate cardinality estimates and thus better query execution plans. The actual query execution, on the other hand, has not yet been optimized in SPLENDID; we therefore plan to integrate optimization techniques as used in FedX. Moreover, the adoption of the SPARQL 1.1 federation extension will allow for more efficient query execution.  +
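A minimal sketch of VOID-based source selection and cardinality-driven join ordering as described above; the endpoint URLs and statistics are invented, and the real SPLENDID optimizer is considerably more sophisticated.

```python
# Hypothetical example: each endpoint advertises per-predicate triple counts
# (as a VOID description would), a triple pattern is routed only to endpoints
# that contain its predicate, and patterns are joined smallest-first.

VOID_STATS = {
    "http://endpoint-a/sparql": {"foaf:name": 120_000, "foaf:knows": 40_000},
    "http://endpoint-b/sparql": {"dbo:birthPlace": 800_000, "foaf:name": 5_000},
}

def select_sources(triple_patterns):
    """Return, for each pattern, the endpoints that can contribute bindings."""
    plan = {}
    for pattern in triple_patterns:
        _, predicate, _ = pattern
        plan[pattern] = [ep for ep, stats in VOID_STATS.items() if predicate in stats]
    return plan

def order_by_cardinality(triple_patterns):
    """Greedy join order: evaluate the most selective (smallest) pattern first."""
    def estimate(pattern):
        _, predicate, _ = pattern
        return sum(stats.get(predicate, 0) for stats in VOID_STATS.values())
    return sorted(triple_patterns, key=estimate)

query = [("?p", "foaf:name", "?n"), ("?p", "dbo:birthPlace", "dbr:Berlin")]
print(select_sources(query))
print(order_by_cardinality(query))
```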
T
Towards a Knowledge Graph Representing Research Findings by Semantifying Survey Articles +In this article, we presented SemSur, a Semantic Survey Ontology, and an approach for creating a comprehensive knowledge graph representing research findings. We see this work as an initial step in a long-term research agenda to create a paradigm shift from document-based to knowledge-based scholarly communication. Our vision is to have this work deployed in an extended version of the existing OpenResearch.org platform. We have created instances of three selected surveys from different fields of research using the SemSur ontology. We evaluated our approach in a study involving nine researchers. As the evaluation results show, SemSur enables successful retrieval of relevant and accurate results without users having to spend much time and effort compared to traditional ways. This ontology can have a significant influence on the scientific community, especially for researchers who want to create a survey article or review the literature on a certain topic. The results of our evaluation show that researchers agree that the traditional way of gathering an overview of a particular research topic is cumbersome and time-consuming: much effort is needed and important information can easily be overlooked. Collaborative integration of research metadata provided by the community supports researchers in this regard. Interviewed domain experts mentioned that it might be necessary to read and understand 30 to 100 scientific articles to get a proper level of understanding or an overview of a topic or its sub-topics. Collaboration among researchers, as owners of each particular research work, to provide a structured and semantic representation of their research achievements can have a huge impact in making their research more accessible. A similar effort is spent on preparing survey and overview articles.  +
Towards a Knowledge Graph for Science +The transition from purely document-centric to a more knowledge-based view on scholarly communication is in line with the current digital transformation of information flows in general and is thus inevitable. However, this also creates a need for the implementation of corresponding tools and workflows supporting the switch. As of now, there are still very few of those tools, and their design and concrete features remain a challenge that is yet to be tackled – collaboratively and in a coordinated manner.  +
U
Unveiling the hidden bride: deep annotation for mapping and migrating legacy data to the Semantic Web +In this paper, we have described deep annotation, an original framework for providing semantic annotation for large sets of data. Deep annotation leaves semantic data where it can be handled best, viz. in database systems. Thus, deep annotation provides a means for mapping and reusing dynamic data on the Semantic Web with tools that are comparatively simple and intuitive to use. To attain this objective, we have defined a deep annotation process and the appropriate architecture. We have incorporated the means for server-side markup that allow the user to define semantic mappings, using OntoMat-Annotizer to create Web presentation-based annotations and OntoMat-Reverse to create schema-based annotations. An ontology and mapping editor and an inference engine are then used to investigate and exploit the resulting descriptions, either for querying the database content or for materializing the content into RDF files. In total, we have provided a complete framework and its prototype implementation for deep annotation.  +
Updating Relational Data via SPARQL/Update +In this paper, we presented our approach OntoAccess, which enables the manipulation of relational data via SPARQL/Update. We introduced the update-aware RDB-to-RDF mapping language R3M, which captures additional information about the database schema, in particular about integrity constraints. This information enables the detection of update requests that are invalid from the RDB perspective. Such requests cannot be executed by the database engine, as they would violate integrity constraints of the database schema. The information can also be exploited to provide semantically rich feedback to the client: the causes for the rejection of a request and possible directions for improvement can be reported in an appropriate format.  +
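A minimal sketch of the idea behind OntoAccess as summarized above: an RDF insert is translated to SQL only if it satisfies the integrity constraints recorded in the mapping; otherwise the client receives an explanatory rejection. The mapping structure and messages below are illustrative assumptions, not actual R3M syntax.

```python
# Hypothetical example: check NOT NULL constraints recorded in the mapping
# before translating an RDF-style insert into a parameterized SQL statement.

MAPPING = {
    "ex:Employee": {
        "table": "employee",
        "properties": {"ex:name": "name", "ex:dept": "dept"},
        "not_null": ["name"],          # integrity constraints from the schema
    }
}

def translate_insert(class_uri, values):
    m = MAPPING[class_uri]
    columns = {m["properties"][p]: v for p, v in values.items() if p in m["properties"]}
    missing = [c for c in m["not_null"] if c not in columns]
    if missing:
        # Reject before touching the database and explain why.
        raise ValueError(f"insert rejected: NOT NULL column(s) {missing} "
                         f"have no value; add the corresponding triple(s)")
    cols = ", ".join(columns)
    params = ", ".join(["%s"] * len(columns))
    return f"INSERT INTO {m['table']} ({cols}) VALUES ({params})", list(columns.values())

print(translate_insert("ex:Employee", {"ex:name": "Ada", "ex:dept": "R&D"}))
```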
Use of OWL and SWRL for Semantic Relational Database Translation +We are currently applying Automapper's approach to other Semantic Bridges. Specifically, we are exploring its use for both SOAP and RESTful services in our Semantic Bridge for Web Services (SBWS).  +
Z
Zhishi.links Results for OAEI 2011 +In this report, we have presented a brief description of Zhishi.links, an instance matching system. We have introduced the architecture of our system and the specific techniques we used. The results have also been analyzed in detail, and several directions for improvement have been proposed.  +
Facts about "Has conclusion"
Has type: Text
("Has type" is a predefined property that describes the datatype of a property and is provided by Semantic MediaWiki.)