Search by property


This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have the property "Has future work" with the value "No future work exists." Since there are only a few results, nearby values are also displayed.

Showing below up to 12 results starting with #1.

List of results

    • A Semantic Web Middleware for Virtual Data Integration on the Web  + (Other future work will be the support for DESCRIBE queries and IRIs as subjects. In the future, the mediator should also use an OWL-DL reasoner to infer additional types for subject nodes specified in the query pattern. Currently, types have to be explicitly specified for each BGP (more precisely, for the first occurrence: the algorithm caches already known types). OWL-DL constraints, for example a qualified cardinality restriction on obs:byObserver with owl:allValuesFrom obs:Observer, would allow the mediator to deduce types of other nodes in the query pattern.)
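The type deduction mentioned above can be viewed as a one-step propagation rule: when a constraint says that all values of a property belong to a class, the mediator can type the object nodes of matching query-pattern edges. A minimal sketch, with a hypothetical property-to-class table standing in for the owl:allValuesFrom restriction (the table and pattern names are illustrative, not the paper's API):

```python
# Hypothetical allValuesFrom-style constraints: property -> required object class.
RESTRICTIONS = {"obs:byObserver": "obs:Observer"}

def infer_object_types(pattern):
    """Propagate types to object nodes of constrained properties in a BGP."""
    inferred = set()
    for subject, predicate, obj in pattern:
        required = RESTRICTIONS.get(predicate)
        if required is not None:
            inferred.add((obj, "rdf:type", required))
    return inferred

# Example: the mediator deduces that ?obs must be an obs:Observer.
pattern = [("?o", "obs:byObserver", "?obs")]
inferred = infer_object_types(pattern)
```

A full reasoner would of course derive this from the ontology axioms rather than a hand-written table; the sketch only shows the shape of the propagation step.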
    • Avalanche: Putting the Spirit of the Web back into Semantic Web Querying  + (The Avalanche system has shown how a completely heterogeneous distributed query engine that makes no assumptions about data distribution could be implemented. The current approach does have a number of limitations. In particular, we need to better understand the employed objective functions for the planner, investigate whether the requirements put on participating triple stores are reasonable, explore whether Avalanche can be changed to a stateless model, and empirically evaluate whether the approach truly scales to a large number of hosts. Here we discuss each of these issues in turn. The core optimization of the Avalanche system lies in its cost and utility function. The basic utility function only considers possible joins, with no information regarding the probability of the respective join. The proposed utility extension UE estimates the join probability of two highly selective molecules. Although this improves the accuracy of the objective function, its limitation to highly selective molecules is often impractical, as many queries (such as our example query) combine highly selective molecules with non-selective ones. Hence, we need to find a probabilistic distributed join cardinality estimation for low-selectivity molecules. One approach might be the use of bloom-filter caches to store precomputed, "popular" estimates. Another might be investigating sampling techniques for distributed join estimation. In order to support Avalanche, existing triple stores should be able to: (i) report statistics (cardinalities, bloom filters, and other future extensions); (ii) support the execution of distributed joins (common in distributed databases), which could be delegated to an intermediary but would be inefficient; and (iii) share the same key space (which can be URIs, but would result in bandwidth-intensive joins and merges). While these requirements seem simple, we need to investigate how complex these extensions of triple stores are in practice. Even better would be an extension of the SPARQL standard with the above-mentioned operations, which we will attempt to propose. The current Avalanche process assumes that hosts keep partial results throughout plan execution to reduce the cost of local database operations, and that result views are kept for the duration of a query. This limits the number of queries a host can handle. We intend to investigate whether a stateless approach is feasible. Note that the simple approach, the use of RESTful services, may not be applicable, as the size of the state (i.e., the partial results) may be huge and overburden the available bandwidth. We designed Avalanche with the need for high scalability in mind. The core idea follows the principle of decentralization. It also supports asynchrony, using asynchronous HTTP requests to avoid blocking; autonomy, by delegating the coordination and execution of the distributed join/update/merge operations to the hosts; concurrency, through the pipeline shown in Figure 1; symmetry, by allowing each endpoint to act as the initiating Avalanche node for a query caller; and fault tolerance, through a number of time-outs and stopping conditions. Nonetheless, an empirical evaluation of Avalanche with a large number of hosts is still missing, a non-trivial shortcoming (due to the lack of suitable partitioned datasets and the significant experimental complexity) that we intend to address in the near future.)
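The bloom-filter idea mentioned for distributed join cardinality estimation can be sketched concretely: each host publishes a compact filter over its join-key values, and another host estimates the join size by probing that filter instead of shipping data. A minimal sketch, not the Avalanche implementation; the filter parameters and key names are made up for illustration:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over string keys (illustrative only)."""
    def __init__(self, size=4096, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, key):
        # Derive several bit positions from salted hashes of the key.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def __contains__(self, key):
        return all(self.bits[pos] for pos in self._positions(key))

def estimate_join_cardinality(local_keys, remote_filter):
    """Estimate the join size by probing the remote host's filter locally."""
    return sum(1 for key in local_keys if key in remote_filter)

# Host B publishes a filter over its join-key values...
remote = BloomFilter()
for key in ["s1", "s2", "s3"]:
    remote.add(key)

# ...and host A estimates the join size without transferring any triples.
estimate = estimate_join_cardinality(["s2", "s3", "s9"], remote)
```

Bloom filters can only overestimate (false positives, no false negatives), which is the usual trade-off for making the exchanged statistic a few kilobytes instead of the full key set.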
    • A Probabilistic-Logical Framework for Ontology Matching  + (The framework is not only useful for aligning concepts and properties but can also include instance matching. For this purpose, one would only need to add a hidden predicate modeling instance correspondences. The resulting matching approach would immediately benefit from probabilistic joint inference, taking into account the interdependencies between terminological and instance correspondences.)
    • Towards a Knowledge Graph for Science  + (The work presented here delineates our initial steps towards a knowledge graph for science. By testing existing and developing new components, we have so far focused on some core technical aspects of the infrastructure. Naturally, there are a number of research problems and implementation issues as well as a range of socio-technical aspects that need to be addressed in order to realize the vision. Dimensions of open challenges are, among others: • the low-threshold integration of researchers through methods of crowd-sourcing, human-machine interaction, and social networks; • automated analysis, quality assessment, and completion of the knowledge graph as well as interlinking with external sources; • support for representing fuzzy information, scientific discourse, and the evolution of knowledge; • development of new methods of exploration, retrieval, and visualization of knowledge graph information.)
    • LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data  + (We aim to explore the combination of LIMES with active learning strategies in a way that a manual configuration of the tool becomes unnecessary. Instead, matching results will be computed quickly by using the exemplars in both the source and target knowledge bases. Subsequently, they will be presented to the user, who will give feedback to the system by rating the quality of found matches. This feedback in turn will be employed to improve the matching configuration and to generate a revised list of matching suggestions for the user. This iterative process will continue until a sufficiently high quality (in terms of precision and recall) of matches is reached.)
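The feedback loop described above, compute candidate matches, collect user ratings, adjust the configuration, repeat, can be sketched in a few lines. This is a toy stand-in, not LIMES itself: a single string-similarity threshold plays the role of the matching configuration, and a callback plays the role of the user:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Simple string similarity standing in for a link-discovery metric."""
    return SequenceMatcher(None, a, b).ratio()

def refine_threshold(source, target, rate_match, threshold=0.8, step=0.05, rounds=3):
    """Iteratively adjust the acceptance threshold from user ratings."""
    suggestions = []
    for _ in range(rounds):
        suggestions = [(s, t) for s in source for t in target
                       if similarity(s, t) >= threshold]
        if not suggestions:
            threshold -= step          # too strict: relax to recall more
            continue
        approved = sum(1 for s, t in suggestions if rate_match(s, t))
        if approved < len(suggestions):
            threshold += step          # user flagged false positives: tighten
        else:
            threshold -= step          # everything approved: try to recall more
    return threshold, suggestions

# Toy run: the "user" approves case-insensitive equality.
threshold, matches = refine_threshold(
    ["Berlin", "Paris"], ["berlin"],
    rate_match=lambda s, t: s.lower() == t.lower())
```

A real active-learning setup would pick the most informative pairs to show the user rather than all candidates, but the loop structure is the same.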
    • LogMap: Logic-based and Scalable Ontology Matching  + (We are currently working on further optimizations, and in the near future, we are planning to integrate LogMap with a Protege-based front-end, such as the one implemented in our tool ContentMap.)
    • Bringing Relational Databases into the Semantic Web: A Survey  + (We close this paper by mentioning some problems that have only been lightly touched upon by database-to-ontology mapping solutions, as well as some aspects that need to be considered by future approaches. 1. Ontology-based data update. Many of the approaches mentioned offer SPARQL-based access to the contents of the database. However, this access is unidirectional. Since the emergence of SPARQL Update, which allows update operations on an RDF graph, the idea of issuing SPARQL Update requests that are transformed to appropriate SQL statements and executed on the underlying relational database has become more and more popular. Some early work has already appeared in the OntoAccess prototype and the extensions of the D2RQ tool, D2RQ/Update and D2RQ++. However, as SPARQL Update is still under development and its semantics is not yet well defined, there is some ambiguity regarding the transformation of some SPARQL Update statements. Moreover, only basic (relation-to-class and attribute-to-property) mappings have been investigated so far. The issue of updating relational data through SPARQL Update is similar to the classic database view update problem; therefore, porting already proposed solutions would contribute significantly to dealing with this issue. 2. Mapping update. Database schemas and ontologies constantly evolve to suit changing application and user needs. Therefore, established mappings between the two should also evolve, instead of being redefined or rediscovered from scratch. This issue is closely related to the previous one, since modifications in either participating model do not simply incur adaptations to the mapping but also cause some necessary changes to the other model as well. So far, only a few solutions have been proposed for the case of the unidirectional propagation of database schema changes to a generated ontology and the consequent adaptation of the mapping. The inverse direction, i.e., modification of the database as a result of changes in the ontology, has not yet been investigated thoroughly. On a practical note, both database trigger functions and mechanisms like the Link Maintenance Protocol (WOD-LMP) from the Silk framework could prove useful for solutions to this issue. 3. Generation of Linked Data. A fair number of approaches support vocabulary reuse, a factor that has always been important for the progress of the Semantic Web, while a few other approaches try to automatically discover the most suitable classes or properties from popular vocabularies that can be mapped to a given database schema. Nonetheless, these efforts are still not adequate for the generation of RDF graphs that can be smoothly integrated into the Linking Open Data (LOD) Cloud. For the generation of true Linked Data, the real-world entities that database values represent should be identified and links between them should be established, in contrast with the majority of current methods, which translate database values to RDF literals. Lately, a few interesting tools that handle the transformation of spreadsheets to linked RDF data by analyzing the content of spreadsheet tables have been presented, the most notable examples being the RDF extension for Google Refine and T2LD. Techniques such as the ones applied in these tools can certainly be adapted to the relational database paradigm. These aspects, together with the challenges enumerated in Section 7, mark the next steps for database-to-ontology mapping approaches. Although a lot of ground has been covered during the last decade, there is definitely some interesting road ahead in order to seamlessly integrate relational databases with the Semantic Web, turning it into reality at last.)
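The basic relation-to-class and attribute-to-property mappings mentioned under point 1 make the SPARQL-Update-to-SQL translation almost mechanical: the class names a table, each property names a column. A minimal sketch of that direction, with invented mapping entries and table/column names (not the API of OntoAccess or D2RQ/Update):

```python
# Hypothetical mapping: the class ex:Person maps to table "person",
# each property maps to a column (attribute-to-property).
PROPERTY_TO_COLUMN = {"ex:name": "name", "ex:age": "age"}
CLASS_TO_TABLE = {"ex:Person": "person"}

def translate_insert(cls, subject_id, triples):
    """Translate a one-subject SPARQL `INSERT DATA` block into a
    parameterized SQL INSERT for the mapped table."""
    table = CLASS_TO_TABLE[cls]
    columns, values = ["id"], [subject_id]
    for prop, value in triples:
        columns.append(PROPERTY_TO_COLUMN[prop])
        values.append(value)
    placeholders = ", ".join(["%s"] * len(values))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return sql, values

sql, params = translate_insert("ex:Person", 42,
                               [("ex:name", "Ada"), ("ex:age", 36)])
```

The hard cases the survey alludes to start exactly where this sketch stops: DELETE/INSERT with WHERE patterns touching several tables runs into the view update problem, where one RDF-level change admits several SQL realizations.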
    • Zhishi.links Results for OAEI 2011  + (We look forward to building an instance matching system with better performance and higher stability in the future.)
    • Optimizing SPARQL Queries over Disparate RDF Data Sources through Distributed Semi-joins  + (We would like to further improve the query evaluation performance by introducing distributed-join-aware join reordering. We will make use of the current Sesame optimization techniques for local queries and add our own component, which will reorder joins according to their relative costs. The costs will be based on statistics taking into account sub-query selectivity, combined with the distinction of whether a triple pattern is supposed to be evaluated locally or at a remote SPARQL endpoint. In addition to join reordering, we would like to make use of statistics about SPARQL endpoints in order to optimize queries even further. Hopefully, the recent initiative called Vocabulary of Interlinked Datasets (http://community.linkeddata.org/MediaWiki/index.php?VoiD) will get to a point where it can be used for this purpose.)
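The cost model sketched in that entry, selectivity statistics plus a local/remote distinction, reduces to a simple greedy ordering: score each triple pattern by its estimated cardinality, inflate the score for patterns that must be shipped to a remote endpoint, and evaluate the cheapest first. A toy sketch with invented statistics and an invented penalty factor:

```python
def estimated_cost(pattern, cardinalities, remote_penalty=10.0):
    """Cost of a triple pattern: estimated cardinality, inflated when the
    pattern must be evaluated at a remote SPARQL endpoint."""
    base = cardinalities.get(pattern["predicate"], 1_000_000)  # unknown: assume large
    return base * (remote_penalty if pattern["remote"] else 1.0)

def reorder_joins(patterns, cardinalities):
    """Greedy reordering: evaluate the most selective (cheapest) pattern first."""
    return sorted(patterns, key=lambda p: estimated_cost(p, cardinalities))

# Hypothetical statistics: a selective remote pattern still beats a
# very unselective local one.
stats = {"ex:isbn": 100, "foaf:name": 50_000}
patterns = [
    {"predicate": "foaf:name", "remote": False},
    {"predicate": "ex:isbn", "remote": True},
]
ordered = reorder_joins(patterns, stats)
```

A production optimizer would recompute costs as variables get bound by earlier joins rather than sorting once up front, but the scoring idea is the same.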
    • FedX: Optimization Techniques for Federated Query Processing on Linked Data  + (While we focused on optimization techniques for conjunctive queries, namely basic graph patterns (BGPs), there is additional potential in developing novel, operator-specific optimization techniques for distributed settings (in particular for OPTIONAL queries), which we are planning to address in future work. In a future release, (remote) statistics (e.g., using VoID) can be incorporated for source selection and to further improve our join order algorithm.)
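The VoID-based source selection mentioned here comes down to an index lookup: if an endpoint's VoID description lists the predicates it serves, a pattern only needs to be sent to endpoints that list its predicate. A minimal sketch with hypothetical endpoint URLs and predicate sets (not FedX's actual data structures):

```python
# Hypothetical VoID-derived statistics: endpoint -> predicates it serves.
VOID_INDEX = {
    "http://endpointA/sparql": {"foaf:name", "foaf:knows"},
    "http://endpointB/sparql": {"ex:isbn"},
}

def select_sources(predicate):
    """Return only the endpoints whose VoID description lists the predicate,
    pruning the rest from the federated query plan."""
    return sorted(ep for ep, preds in VOID_INDEX.items() if predicate in preds)

sources = select_sources("foaf:name")
```

Without such statistics, a federated engine must either probe every endpoint with ASK queries or broadcast each pattern everywhere, which is exactly the overhead this pruning avoids.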
    • Querying over Federated SPARQL Endpoints : A State of the Art Survey  + ({{{Future work}}})