Search by property


This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have the property "Has future work" with the value "No future work exists.". Since there are only a few results, nearby values are also displayed.

Showing below up to 11 results starting with #1.

List of results

  • Updating Relational Data via SPARQL/Update  + (Future work is planned for various aspects of OntoAccess. Further research needs to be done on bridging the conceptual gap between RDBs and the Semantic Web. Ontology-based write access to relational data creates completely new challenges on this topic compared with read-only approaches. The presence of schema constraints in the database can lead to the rejection of update requests that would otherwise be accepted by a native triple store. A feedback protocol that provides semantically rich information about the cause of a rejection and possible directions for improvement plays a major role in bridging the gap. Other database constraints such as assertions have to be evaluated as well to see whether they can reasonably be supported in the mapping. A more formal definition of the mapping language will also be provided. Furthermore, we will extend our prototype implementation to support the SPARQL/Update MODIFY operation, SPARQL queries, and the aforementioned feedback protocol.)
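
    As a rough illustration (ours, not the paper's), the sketch below shows the kind of SPARQL/Update MODIFY request such a prototype would have to translate into relational UPDATE statements, sent over HTTP with Python's standard library. The endpoint URL, graph name, and vocabulary are hypothetical.

```python
import urllib.request

# Hypothetical SPARQL/Update MODIFY request (pre-SPARQL-1.1 syntax) that an
# OntoAccess-style mediator would map to a relational UPDATE. The graph,
# prefix, and predicates are invented for illustration.
update = """
PREFIX ex: <http://example.org/schema#>
MODIFY <http://example.org/employees>
DELETE { ?p ex:salary ?old }
INSERT { ?p ex:salary 55000 }
WHERE  { ?p ex:empId 42 ; ex:salary ?old }
"""

req = urllib.request.Request(
    "http://localhost:8080/update",        # hypothetical update endpoint
    data=update.encode("utf-8"),
    headers={"Content-Type": "application/sparql-update"},
    method="POST",
)
# A schema constraint (e.g. NOT NULL, foreign key) may cause the store to
# reject the update; the proposed feedback protocol would explain why.
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```
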
  • Discovering and Maintaining Links on the Web of Data  + (Future work on Silk will focus on the following areas: We will implement further similarity metrics to support a broader range of linking use cases. To assist users in writing Silk-LSL specifications, machine learning techniques could be employed to adjust weightings or optimize the structure of the matching specification. Finally, we will evaluate the suitability of Silk for detecting duplicate entities within local datasets instead of using it to discover links between disparate RDF data sources. The value of the Web of Data rises and falls with the quantity and quality of links between data sources. We hope that Silk and other similar tools will help to strengthen the linkage between data sources and thereby contribute to the overall utility of the network.)
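
    To make the idea of a similarity metric concrete, here is a toy flavor of what a Silk-like tool combines in a linkage rule: a normalised string similarity over entity labels with an acceptance threshold. The datasets, URIs, and the 0.9 threshold are illustrative assumptions, not Silk-LSL itself.

```python
# Toy label-similarity matcher: emit owl:sameAs candidates whose score
# passes a fixed threshold. Real linkage rules aggregate several metrics.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(source: dict, target: dict, threshold: float = 0.9):
    for s_uri, s_label in source.items():
        for t_uri, t_label in target.items():
            score = label_similarity(s_label, t_label)
            if score >= threshold:
                yield s_uri, t_uri, score

source = {"http://dbpedia.org/resource/Berlin": "Berlin"}
target = {"http://sws.geonames.org/2950159/": "Berlin"}
for s, t, score in match(source, target):
    print(f"<{s}> owl:sameAs <{t}>  # score {score:.2f}")
```
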
  • Adaptive Integration of Distributed Semantic Web Data  + (Future work will aim to investigate other data sets with different characteristics, as well as larger data sets. As the approach presented in this paper focuses on efficiently executing a specific kind of query by adaptively ordering multiple joins, further work will focus on optimising other kinds of queries and implementing support for more SPARQL query language features. Future work will also concentrate on investigating how the work can be applied in various domains.)
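
    As a rough sketch (ours, not the paper's operator model), adaptive join ordering can be expressed as re-picking the cheapest remaining join after every step, using whatever cardinality estimates are currently available. The data, estimator, and executor below are toys.

```python
# Minimal sketch of adaptive join ordering: rather than fixing the join order
# at planning time, re-estimate after every join and evaluate the pattern
# that currently looks cheapest.
def adaptive_join(patterns, estimate, execute):
    bindings = [{}]                 # start with one empty solution mapping
    remaining = list(patterns)
    while remaining:
        nxt = min(remaining, key=lambda p: estimate(p, bindings))
        remaining.remove(nxt)
        bindings = execute(nxt, bindings)   # join current bindings with nxt
    return bindings

# Toy in-memory "sources": each pattern name maps to a list of solution rows.
data = {"p1": [{"x": 1}, {"x": 2}], "p2": [{"x": 1, "y": "a"}]}
est = lambda p, b: len(data[p]) * max(len(b), 1)
exe = lambda p, b: [dict(m, **r) for m in b for r in data[p]
                    if all(m.get(k, v) == v for k, v in r.items())]
print(adaptive_join(["p1", "p2"], est, exe))   # [{'x': 1, 'y': 'a'}]
```
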
  • Analysing Scholarly Communication Metadata of Computer Science Events  + (In further research, we aim to expand the analysis to other fields of science and to smaller events. It would also be interesting to assess the impact of digitisation on further scholarly communication means, such as journals (which are more important in fields other than computer science), workshops, funding calls, proposal applications, and awards. Although large parts of our analysis methodology are already automated, we plan to further optimise the process so that analyses can be generated almost instantly from the OpenResearch data basis.)
  • Querying Distributed RDF Data Sources with SPARQL  + (In further work, we plan to work on mapping and translation rules between the vocabularies used by different SPARQL endpoints. We will also investigate generalizing the query patterns that can be handled, as well as blank nodes and identity relationships across graphs.)
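
    A minimal sketch of what such a vocabulary translation rule could look like (the mapping from foaf:name to vcard:fn is our example, not one from the paper):

```python
# Toy predicate-translation rule: rewrite a query that uses foaf:name so it
# can be sent to an endpoint that models names with vcard:fn instead.
MAPPINGS = {
    "http://xmlns.com/foaf/0.1/name": "http://www.w3.org/2006/vcard/ns#fn",
}

def translate(triple_patterns):
    """Rewrite predicate IRIs of a basic graph pattern via the mapping table."""
    return [(s, MAPPINGS.get(p, p), o) for (s, p, o) in triple_patterns]

query = [("?person", "http://xmlns.com/foaf/0.1/name", "?name")]
print(translate(query))
# [('?person', 'http://www.w3.org/2006/vcard/ns#fn', '?name')]
```
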
  • Integration of Scholarly Communication Metadata using Knowledge Graphs  + (In the context of the OSCOSS project on Opening Scholarly Communication in the Social Sciences, the SCM-KG approach will be used for providing authors with precise and complete lists of references during the article writing process.)
  • ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints  + (In the future we plan to extend ANAPSID with more powerful and lightweight operators, such as Eddy and MJoin, which are able to route received responses through different operators and adapt the execution to unpredictable delays by changing the order in which each data item is routed.)
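
    The routing idea can be sketched as follows (a drastic simplification of Eddy-style operators; the data and cost function are invented):

```python
# Toy eddy-style router: each arriving tuple is pushed through the remaining
# operators in whatever order currently looks cheapest, so a slow or blocked
# operator does not dictate a fixed pipeline order.
def eddy(tuples, operators, cost):
    for t in tuples:
        pending = list(operators)
        ok = True
        while pending and ok:
            op = min(pending, key=cost)   # route to the cheapest operator first
            pending.remove(op)
            ok = op(t)                    # drop the tuple on the first failure
        if ok:
            yield t

is_recent = lambda t: t["year"] >= 2010
mentions_sparql = lambda t: "SPARQL" in t["title"]
rows = [{"title": "SPARQL engines", "year": 2011},
        {"title": "Old survey", "year": 1999}]
print(list(eddy(rows, [is_recent, mentions_sparql], cost=lambda op: 1)))
# [{'title': 'SPARQL engines', 'year': 2011}]
```
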
  • Towards a Knowledge Graph Representing Research Findings by Semantifying Survey Articles  + (Integrating our methodology with the procedure of publishing survey articles can help to create a paradigm shift. We plan to further extend the ontology to cover other research methodologies and fields. For a more robust implementation of the proposed approach, we are planning to use and significantly expand the OpenResearch.org platform, together with user-friendly SPARQL auto-generation services that give non-expert users access to metadata analyses. A more comprehensive evaluation of the services will be carried out after the implementation of the curation, exploration, and discovery services. In addition, we intend to develop and foster a living community around OpenResearch.org and SemSur, to extend the ontology and to ingest metadata covering other research fields.)
  • AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies  + (No data available now.)
  • Accessing and Documenting Relational Databases through OWL Ontologies  + (No data available now.)
  • DataMaster – a Plug-in for Importing Schemas and Data from Relational Databases into Protégé  + (No data available now.)
  • From Relational Data to RDFS Models  + (No data available now.)
  • D2RQ – Treating Non-RDF Databases as Virtual RDF Graphs  + (No future work exists.)
  • A Semantic Web Middleware for Virtual Data Integration on the Web  + (Other future work will be support for DESCRIBE queries and IRIs as subjects. In the future, the mediator should also use an OWL-DL reasoner to infer additional types for subject nodes specified in the query pattern. Currently, types have to be explicitly specified for each BGP (more precisely, for the first occurrence; the algorithm caches already known types). OWL-DL constraints, such as a qualified cardinality restriction on obs:byObserver with owl:allValuesFrom obs:Observer, would allow the mediator to deduce the types of other nodes in the query pattern.)
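
    A minimal sketch of that deduction, with the OWL-DL restriction simplified to a range-style rule (the real mediator would consult an OWL reasoner rather than a lookup table):

```python
# Simplified type deduction: an owl:allValuesFrom restriction on
# obs:byObserver means every value of that property is an obs:Observer,
# so the mediator can type query variables without explicit rdf:type triples.
ALL_VALUES_FROM = {
    "obs:byObserver": "obs:Observer",   # property -> class of its values
}

def infer_types(bgp):
    """bgp: list of (subject, predicate, object) query-pattern triples."""
    inferred = {}
    for s, p, o in bgp:
        if p in ALL_VALUES_FROM and o.startswith("?"):
            inferred[o] = ALL_VALUES_FROM[p]
    return inferred

bgp = [("?obs", "obs:byObserver", "?person")]
print(infer_types(bgp))   # {'?person': 'obs:Observer'}
```
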
  • Avalanche: Putting the Spirit of the Web back into Semantic Web Querying  + (The Avalanche system has shown how a completely heterogeneous distributed query engine that makes no assumptions about data distribution could be implemented. The current approach does have a number of limitations. In particular, we need to better understand the employed objective functions for the planner, investigate whether the requirements put on participating triple stores are reasonable, explore whether Avalanche can be changed to a stateless model, and empirically evaluate whether the approach truly scales to a large number of hosts. Here we discuss each of these issues in turn.
    The core optimization of the Avalanche system lies in its cost and utility function. The basic utility function only considers possible joins, with no information regarding the probability of the respective join. The proposed utility extension UE estimates the join probability of two highly selective molecules. Although this improves the accuracy of the objective function, its limitation to highly selective molecules is often impractical, as many queries (such as our example query) combine highly selective molecules with non-selective ones. Hence, we need to find a probabilistic distributed join cardinality estimation for low-selectivity molecules. One approach might be the use of bloom-filter caches to store precomputed, “popular” estimates; another might be to investigate sampling techniques for distributed join estimation.
    In order to support Avalanche, existing triple stores should be able to:
    – report statistics: cardinalities, bloom filters, and other future extensions;
    – support the execution of distributed joins (common in distributed databases), which could be delegated to an intermediary but would be inefficient;
    – share the same key space (which can be URIs, but would result in bandwidth-intensive joins and merges).
    Whilst these requirements seem simple, we need to investigate how complex these extensions of triple stores are in practice. Even better would be an extension of the SPARQL standard with the above-mentioned operations, which we will attempt to propose.
    The current Avalanche process assumes that hosts keep partial results throughout plan execution to reduce the cost of local database operations, and that result views are kept for the duration of a query. This limits the number of queries a host can handle. We intend to investigate whether a stateless approach is feasible. Note that the simple approach, the use of RESTful services, may not be applicable, as the size of the state (i.e., the partial results) may be huge and overburden the available bandwidth.
    We designed Avalanche with the need for high scalability in mind. The core idea follows the principle of decentralization. It also supports asynchrony, using asynchronous HTTP requests to avoid blocking; autonomy, by delegating the coordination and execution of the distributed join/update/merge operations to the hosts; concurrency, through the pipeline shown in Figure 1; symmetry, by allowing each endpoint to act as the initiating Avalanche node for a query caller; and fault tolerance, through a number of time-outs and stopping conditions. Nonetheless, an empirical evaluation of Avalanche with a large number of hosts is still missing, a non-trivial shortcoming (due to the lack of suitable partitioned datasets and the significant experimental complexity) that we intend to address in the near future.)
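
    The bloom-filter idea for distributed join cardinality estimation can be sketched as follows (a toy filter; the sizes, keys, and the protocol around it are our assumptions): each host summarises its join-key values in a Bloom filter, and a peer probes its own keys against the filter to estimate the overlap without shipping any data.

```python
# Toy Bloom filter over join keys, used to estimate join cardinality between
# two hosts without transferring the keys themselves.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._hashes(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._hashes(item))

# Host A publishes a filter over its join keys; host B probes its own keys.
host_a_keys = {f"http://example.org/person/{i}" for i in range(100)}
bf = BloomFilter()
for key in host_a_keys:
    bf.add(key)

host_b_keys = {f"http://example.org/person/{i}" for i in range(50, 200)}
estimated_overlap = sum(1 for key in host_b_keys if key in bf)
print(estimated_overlap)  # close to the true overlap of 50, plus false positives
```
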
  • A Probabilistic-Logical Framework for Ontology Matching  + (The framework is not only useful for aligning concepts and properties but can also include instance matching. For this purpose, one would only need to add a hidden predicate modeling instance correspondences. The resulting matching approach would immediately benefit from probabilistic joint inference, taking into account the interdependencies between terminological and instance correspondences.)
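
    As a hedged illustration (the predicate names and the weight are ours, not the paper's), such a hidden predicate could enter the probabilistic-logical formulation as a weighted formula coupling the two kinds of correspondences:

```latex
% Illustrative weighted formula: an instance correspondence between a and b,
% together with their class memberships, raises the probability of a concept
% correspondence between C and D; joint inference also works in reverse.
w :\quad map_i(a, b) \land inst(a, C) \land inst(b, D) \rightarrow map_c(C, D)
```
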
  • Towards a Knowledge Graph for Science  + (The work presented here delineates our initial steps towards a knowledge graph for science. By testing existing components and developing new ones, we have so far focused on some core technical aspects of the infrastructure. Naturally, there are a number of research problems and implementation issues, as well as a range of socio-technical aspects, that need to be addressed in order to realize the vision. Dimensions of open challenges include, among others:
    • the low-threshold integration of researchers through methods of crowd-sourcing, human-machine interaction, and social networks;
    • automated analysis, quality assessment, and completion of the knowledge graph, as well as interlinking with external sources;
    • support for representing fuzzy information, scientific discourse, and the evolution of knowledge;
    • development of new methods of exploration, retrieval, and visualization of knowledge graph information.)
  • LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data  + (We aim to explore the combination of LIMES with active learning strategies in such a way that manual configuration of the tool becomes unnecessary. Instead, matching results will be computed quickly by using the exemplars in both the source and target knowledge bases. Subsequently, they will be presented to the user, who will give feedback to the system by rating the quality of the found matches. This feedback will in turn be employed to improve the matching configuration and to generate a revised list of matching suggestions for the user. This iterative process will continue until a sufficiently high quality of matches (in terms of precision and recall) is reached.)
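
    The loop described above might look roughly like this (the scoring function, candidate pairs, and threshold-update rule are illustrative assumptions, not LIMES internals): repeatedly ask the user about the candidate whose score is closest to the current acceptance threshold, then adjust the threshold from the verdict.

```python
# Toy active-learning loop: rate the most uncertain candidate, move the
# acceptance threshold according to the user's verdict, and repeat.
from difflib import SequenceMatcher

def score(a, b):
    return SequenceMatcher(None, a, b).ratio()

def active_learning(pairs, oracle, threshold=0.5, rounds=3, step=0.05):
    unrated = list(pairs)
    for _ in range(min(rounds, len(unrated))):
        unrated.sort(key=lambda p: abs(score(*p) - threshold))
        pair = unrated.pop(0)             # most informative unrated candidate
        if oracle(pair):
            threshold -= step             # user confirmed: accept more matches
        else:
            threshold += step             # user rejected: be stricter
    return [p for p in pairs if score(*p) >= threshold]

pairs = [("Berlin", "Berlin"), ("Bonn", "Berlin"), ("Cologne", "Koeln")]
feedback = {("Berlin", "Berlin"): True,   # stand-in for interactive ratings
            ("Bonn", "Berlin"): False,
            ("Cologne", "Koeln"): True}
print(active_learning(pairs, feedback.get))
# [('Berlin', 'Berlin'), ('Cologne', 'Koeln')]
```
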