Search by property


This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have the property "Has future work" with value "-". Since there are only a few results, nearby values are also displayed.

Showing up to 35 results below, starting with #1.


List of results

    • Cross: an OWL wrapper for reasoning on relational databases  +
      (A first direction for further work would be to try to strengthen the theorem, to have an equivalence of OWL consistency with full legality, i.e. taking into account foreign keys. This could actually be done by using an expressive feature of OWL (the oneOf constructor, not mentioned in this paper), but would possibly make the reasoning intractable. Another solution would be to propose, in a similar way to finite model reasoning, an algorithm of closed-world reasoning which would not be allowed to create individuals. We also want to get more experimental results for the Cross implementation. Preliminary results are encouraging: the transformation of the schema of a real database (127 tables, 869 columns, 132 unicity constraints, no foreign key) took around 1.5 s; the resulting ontology was loaded in Pellet in about 9 s, while reasoning took about 3 s. Those results seem reasonable for a fairly big schema. We now plan to experiment on the use cases presented in Section 6.3 with that database and a sample of other real databases.)
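      The loading and reasoning figures quoted above correspond to a standard OWL consistency-checking workflow. As a rough, hypothetical sketch only (this is not the Cross implementation; the library choice and file path are assumptions), such a check could be scripted in Python with owlready2:

      ```python
      # Hypothetical sketch of the consistency check described above, using
      # owlready2 and its bundled Pellet; the ontology path is an assumption.
      from owlready2 import (get_ontology, sync_reasoner_pellet,
                             OwlReadyInconsistentOntologyError)

      # Load the ontology previously generated from the relational schema.
      onto = get_ontology("file:///tmp/schema.owl").load()

      try:
          with onto:
              sync_reasoner_pellet()   # classify and check consistency
          print("ontology is consistent")
      except OwlReadyInconsistentOntologyError:
          print("ontology is inconsistent")
      ```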
    • Relational.OWL - A Data and Schema Representation Format Based on OWL  +
      (A further extension for Relational.OWL could be a corresponding protocol extending the possibilities of Relational.OWL to particularly support data exchange or replication. There we could employ the advantages of our knowledge representation technique for recurring problems within such a data exchange process, e.g. identifying the same data items on remote databases. Although autonomously communicating databases in a metadata exchange are still more vision than reality, our model takes us one step further.)
    • SLINT: A Schema-Independent Linked Data Interlinking System  +
      (Although SLINT achieves good results on the tested datasets, these are not sufficient to evaluate the scalability of our system, which we consider the current limiting point because of the use of a weighted co-occurrence matrix. We will investigate a solution for this issue in our next work. Besides, we are also interested in automatic configuration of every threshold used in SLINT and in improving SLINT into a novel cross-domain interlinking system.)
    • SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking  +
      (As future work, we intend to investigate how our model can be adjusted to consider partial string matching in the similarity function that we proposed, and to accommodate different score distribution metrics as the threshold for the parameter. Also, we intend to evaluate this approach on different collections that may provide a more accurate reference alignment than the ones we used in this work.)
    • SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions  +
      (As next steps, we plan to investigate whether VOID descriptions can easily be extended with more detailed statistics in order to allow for more accurate cardinality estimates and, thus, better query execution plans. On the other hand, the actual query execution has not yet been optimized in SPLENDID. Therefore, we plan to integrate optimization techniques as used in FedX. Moreover, the adoption of the SPARQL 1.1 federation extension will also allow for more efficient query execution.)
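      For illustration, per-predicate VoID statistics of the kind envisaged above could feed cardinality estimates roughly as follows (a sketch, not SPLENDID code; the file name is hypothetical, while void:propertyPartition, void:property, and void:triples are standard VoID terms):

      ```python
      # Sketch: estimate the cardinality of a triple pattern from a VoID
      # description, in the spirit of the statistics-based planning above.
      # The file name is hypothetical; the VoID vocabulary is standard.
      from rdflib import Graph, Namespace, URIRef

      VOID = Namespace("http://rdfs.org/ns/void#")
      g = Graph().parse("stats.ttl", format="turtle")

      def estimated_triples(dataset: URIRef, predicate: URIRef) -> int:
          """Return void:triples for the predicate's partition, if declared."""
          for part in g.objects(dataset, VOID.propertyPartition):
              if g.value(part, VOID.property) == predicate:
                  return int(g.value(part, VOID.triples, default=0))
          return 0
      ```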
    • Use of OWL and SWRL for Semantic Relational Database Translation  +
      (Currently, URIs returned by SBRD are unique but generally not resolvable. We intend to address this issue in future versions by generating resolvable URIs and incorporating the best practices of the Linking Open Data initiative. To the best of our knowledge, our rules and their usage are consistent with the design goals of the DL Safe SWRL Rules task force. Decidability is a critical aspect of our architecture, which therefore focuses on features such as the use of Horn rules with unary and binary predicates. We will continue to monitor the task force's progress and incorporate necessary modifications. The advantages of SWRL built-ins have also proven essential. It is our hope that they are addressed by the DL Safe task force and will be comparable to the built-ins provided by SWRL.)
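      As a toy illustration of the "resolvable URIs" plan (not SBRD's actual scheme; the base URI and key layout are invented), row URIs could be minted from primary keys like this:

      ```python
      # Toy sketch of minting dereferenceable row URIs from primary keys;
      # the base URI and key scheme are invented, not SBRD's.
      from urllib.parse import quote

      BASE = "http://data.example.org/resource"  # would need to dereference

      def row_uri(table: str, *pk_values) -> str:
          """Mint a stable URI for a row from its table name and primary key."""
          key = "/".join(quote(str(v), safe="") for v in pk_values)
          return f"{BASE}/{quote(table, safe='')}/{key}"

      print(row_uri("employee", 42))
      # -> http://data.example.org/resource/employee/42
      ```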
    • Querying the Web of Interlinked Datasets using VOID Descriptions  +
      (Developing a tool which extracts well-defined VOID descriptions of datasets, and by this means evaluating our approach, is required future work to confirm the applicability of WoDQA to linked open data. Also, evaluating the analysis cost of WoDQA for a large VOID store will become possible once well-defined VOIDs are constructed.)
    • Querying the Web of Data with Graph Theory-based Techniques  + (Explore the co-reference issue in the Linked Data cloud. From the perspective of distributed SPARQL queries, this issue is getting worse as more data are published, and we plan to address it by using our Virtual Graph approach.)
    • Unveiling the hidden bride: deep annotation for mapping and migrating legacy data to the Semantic Web  +
      (For the future, there is a long list of open issues concerning deep annotation—from the more mundane, though important, ones (top) to far-reaching ones (bottom):
      (1) Granularity: So far we have only considered atomic database fields. For instance, one may find a string "Proceedings of the Eleventh International World Wide Web Conference, WWW2002, Honolulu, Hawaii, USA, 7–11 May 2002." as the title of a book, whereas one might rather be interested in separating this field into title, location, and date.
      (2) Automatic derivation of server-side Web page markup: A content management system like Zope could provide the means for automatically deriving server-side Web page markup for deep annotation. Thus, the database provider could be freed from any workload, while allowing for participation in the Semantic Web. Some steps in this direction are currently being pursued in the KAON CMS, which is based on Zope.
      (3) Other information structures: For now, we have built our deep annotation process on SQL and relational databases. Future schemes could exploit XQuery or an ontology-based query language.
      (4) Interlinkage: In the future, deep annotations may even link to each other, creating a dynamic interconnected Semantic Web that allows translation between different servers.
      (5) Opening the possibility to directly query the database certainly creates problems, such as new possibilities for denial-of-service attacks. In fact, some queries, e.g. ones that involve too many joins over large tables, may prove hazardous. Nevertheless, we see this rather as a challenge to be solved by clever schemes for allotting CPU processing time (with the possibility that queries are not answered because the time allotted for one query to one user is up) than as a complete "no go."
      We believe that these options make deep annotation a rather intriguing scheme on which a considerable part of the Semantic Web might be built.)
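      As a toy illustration of the granularity issue (1), the example string above can be split into title, location, and date; the regular expression is tuned to this one string and is no general solution:

      ```python
      # Toy sketch for point (1): splitting the atomic title field from the
      # example above into finer-grained parts. The pattern only fits this
      # particular string; a real solution would need proper field parsing.
      import re

      field = ("Proceedings of the Eleventh International World Wide Web "
               "Conference, WWW2002, Honolulu, Hawaii, USA, 7–11 May 2002.")

      m = re.match(r"(?P<title>.+?), (?P<event>WWW\d{4}), (?P<location>.+), "
                   r"(?P<date>[\d–-]+ \w+ \d{4})\.$", field)
      if m:
          print(m.group("title"))     # Proceedings of the ... Conference
          print(m.group("location"))  # Honolulu, Hawaii, USA
          print(m.group("date"))      # 7–11 May 2002
      ```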
    • Updating Relational Data via SPARQL/Update  +
      (Future work is planned for various aspects of OntoAccess. Further research needs to be done on bridging the conceptual gap between RDBs and the Semantic Web. Ontology-based write access to relational data creates completely new challenges on this topic compared with read-only approaches. The presence of schema constraints in the database can lead to the rejection of update requests that would otherwise be accepted by a native triple store. A feedback protocol that provides semantically rich information about the cause of a rejection and possible directions for improvement plays a major role in bridging the gap. Other database constraints such as assertions have to be evaluated as well to see if they can reasonably be supported in the mapping. Also, a more formal definition of the mapping language will be provided. Furthermore, we will extend our prototype implementation to support the SPARQL/Update MODIFY operation, SPARQL queries, and the feedback protocol just mentioned.)
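      A minimal sketch of the rejection-plus-feedback idea (not the OntoAccess implementation; the mapping, table, and message format are invented) against an in-memory SQLite database:

      ```python
      # Minimal sketch of the feedback idea above: a write that violates a
      # schema constraint is rejected with an explanation instead of failing
      # silently. Mapping, table, and message format are invented here.
      import sqlite3

      # Hypothetical predicate-to-column mapping.
      MAPPING = {"http://example.org/vocab#age": ("person", "age")}

      def apply_update(conn, subject_id, predicate, value):
          table, column = MAPPING[predicate]
          try:
              conn.execute(f"UPDATE {table} SET {column} = ? WHERE id = ?",
                           (value, subject_id))
              conn.commit()
              return "ok"
          except sqlite3.IntegrityError as e:
              # Feedback protocol: report *why* the database refused the write.
              return f"rejected by schema constraint: {e}"

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY,"
                   " age INTEGER CHECK (age >= 0))")
      conn.execute("INSERT INTO person VALUES (1, 30)")
      print(apply_update(conn, 1, "http://example.org/vocab#age", -5))
      ```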
    • Discovering and Maintaining Links on the Web of Data  +
      (Future work on Silk will focus on the following areas: We will implement further similarity metrics to support a broader range of linking use cases. To assist users in writing Silk-LSL specifications, machine learning techniques could be employed to adjust weightings or optimize the structure of the matching specification. Finally, we will evaluate the suitability of Silk for detecting duplicate entities within local datasets instead of using it to discover links between disparate RDF data sources. The value of the Web of Data rises and falls with the amount and the quality of links between data sources. We hope that Silk and other similar tools will help to strengthen the linkage between data sources and therefore contribute to the overall utility of the network.)
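      The kind of weighted metric combination a Silk-LSL linkage rule expresses can be sketched as follows (metrics, weights, threshold, and data are illustrative assumptions, not Silk's defaults):

      ```python
      # Sketch of a weighted combination of similarity metrics, in the spirit
      # of a linkage rule; metrics, weights, and data are illustrative only.
      from difflib import SequenceMatcher

      def string_sim(a: str, b: str) -> float:
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def year_sim(a: int, b: int) -> float:
          return max(0.0, 1.0 - abs(a - b) / 10.0)

      def match_score(e1: dict, e2: dict) -> float:
          # Weighted average; a link would be emitted above some threshold.
          return (0.7 * string_sim(e1["label"], e2["label"])
                  + 0.3 * year_sim(e1["founded"], e2["founded"]))

      a = {"label": "Berlin", "founded": 1237}
      b = {"label": "Berlin, Germany", "founded": 1237}
      print(round(match_score(a, b), 3))
      ```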
    • Adaptive Integration of Distributed Semantic Web Data  +
      (Future work will aim to investigate other data sets with different characteristics as well as larger data sets. As the approach presented in this paper focuses on efficiently executing a specific kind of query, that of adaptively ordering multiple joins, further work will focus on optimising other kinds of queries and implementing support for more SPARQL query language features. Future work will also concentrate on investigating how the work can be applied in various domains.)
    • Analysing Scholarly Communication Metadata of Computer Science Events  +
      (In further research, we aim to expand the analysis to other fields of science and to smaller events. Also, it is interesting to assess the impact of digitisation with regard to further scholarly communication means, such as journals (which are more important in fields other than computer science), workshops, funding calls and proposal applications, as well as awards. Although large parts of our analysis methodology are already automated, we plan to further optimise the process so that analyses can be generated almost instantly from the OpenResearch data basis.)
    • Querying Distributed RDF Data Sources with SPARQL  +
      (In further work, we plan to work on mapping and translation rules between the vocabularies used by different SPARQL endpoints. Also, we will investigate generalizing the query patterns that can be handled, as well as blank nodes and identity relationships across graphs.)
    • Integration of Scholarly Communication Metadata using Knowledge Graphs  + (In the context of the OSCOSS project on Opening Scholarly Communication in the Social Sciences, the SCM-KG approach will be used for providing authors with precise and complete lists of references during the article writing process.)
    • ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints  +
      (In the future we plan to extend ANAPSID with more powerful and lightweight operators like Eddy and MJoin, which are able to route received responses through different operators, and adapt the execution to unpredictable delays by changing the order in which each data item is routed.)
    • Towards a Knowledge Graph Representing Research Findings by Semantifying Survey Articles  +
      (Integrating our methodology with the procedure of publishing survey articles can help to create a paradigm shift. We plan to further extend the ontology to cover other research methodologies and fields. For a more robust implementation of the proposed approach, we are planning to use and significantly expand the OpenResearch.org platform and user-friendly SPARQL auto-generation services for accessing metadata analysis for non-expert users. A more comprehensive evaluation of the services will be done after the implementation of the curation, exploration and discovery services. In addition, our intention is to develop and foster a living community around OpenResearch.org and SemSur, to extend the ontology and to ingest metadata to cover other research fields.)
    • A Survey of Current Link Discovery Frameworks  + (No future work exists.)
    • AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies  + (No data available now.)
    • Accessing and Documenting Relational Databases through OWL Ontologies  + (No data available now.)
    • DataMaster – a Plug-in for Importing Schemas and Data from Relational Databases into Protégé  + (No data available now.)
    • From Relational Data to RDFS Models  + (No data available now.)
    • D2RQ – Treating Non-RDF Databases as Virtual RDF Graphs  + (No future work exists.)
    • A Semantic Web Middleware for Virtual Data Integration on the Web  +
      (Other future work will be the support for DESCRIBE queries and IRIs as subjects. In the future, the mediator should also use an OWL-DL reasoner to infer additional types for subject nodes specified in the query pattern. Currently, types have to be explicitly specified for each BGP (more precisely, for the first occurrence: the algorithm caches already known types). OWL-DL constraints, for example a qualified cardinality restriction on obs:byObserver with owl:allValuesFrom obs:Observer, would allow the mediator to deduce types of other nodes in the query pattern.)
    • Avalanche: Putting the Spirit of the Web back into Semantic Web Querying  +
      (The Avalanche system has shown how a completely heterogeneous distributed query engine that makes no assumptions about data distribution could be implemented. The current approach does have a number of limitations. In particular, we need to better understand the employed objective functions for the planner, investigate whether the requirements put on participating triple stores are reasonable, explore whether Avalanche can be changed to a stateless model, and empirically evaluate whether the approach truly scales to a large number of hosts. Here we discuss each of these issues in turn. The core optimization of the Avalanche system lies in its cost and utility function. The basic utility function only considers possible joins with no information regarding the probability of the respective join. The proposed utility extension UE estimates the join probability of two highly selective molecules. Although this improves the accuracy of the objective function, its limitation to highly selective molecules is often impractical, as many queries (such as our example query) combine highly selective molecules with non-selective ones. Hence, we need to find a probabilistic distributed join cardinality estimation for low-selectivity molecules. One approach might be the usage of bloom-filter caches to store precomputed, "popular" estimates. Another might be investigating sampling techniques for distributed join estimation. In order to support Avalanche, existing triple stores should be able to:
      – report statistics: cardinalities, bloom filters, and other future extensions;
      – support the execution of distributed joins (common in distributed databases), which could be delegated to an intermediary but would be inefficient;
      – share the same key space (keys can be URIs, but that would result in bandwidth-intensive joins and merges).
      Whilst these requirements seem simple, we need to investigate how complex these extensions of triple stores are in practice. Even better would be an extension of the SPARQL standard with the above-mentioned operations, which we will attempt to propose. The current Avalanche process assumes that hosts keep partial results throughout plan execution to reduce the cost of local database operations and that result views are kept for the duration of a query. This limits the number of queries a host can handle. We intend to investigate whether a stateless approach is feasible. Note that the simple approach—the use of RESTful services—may not be applicable, as the size of the state (i.e., the partial results) may be huge and overburden the available bandwidth. We designed Avalanche with the need for high scalability in mind. The core idea follows the principle of decentralization. It also supports asynchrony using asynchronous HTTP requests to avoid blocking, autonomy by delegating the coordination and execution of the distributed join/update/merge operations to the hosts, concurrency through the pipeline shown in Figure 1, symmetry by allowing each endpoint to act as the initiating Avalanche node for a query caller, and fault tolerance through a number of time-outs and stopping conditions. Nonetheless, an empirical evaluation of Avalanche with a large number of hosts is still missing—a non-trivial shortcoming (due to the lack of suitable partitioned datasets and the significant experimental complexity) that we intend to address in the near future.)
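      One reading of the bloom-filter direction mentioned above (filter size, hash count, and data are assumptions for illustration): each host publishes a small filter over its join-key values, so another host can estimate join cardinality without shipping the values themselves:

      ```python
      # Sketch of the bloom-filter idea above: estimate how many local join
      # keys a remote host might share, without transferring the key sets.
      # Filter parameters and data are illustrative assumptions.
      import hashlib

      M, K = 4096, 3  # filter bits and hash count (assumptions)

      def _positions(value):
          for i in range(K):
              h = hashlib.sha1(f"{i}:{value}".encode()).digest()
              yield int.from_bytes(h[:4], "big") % M

      def bloom(values):
          """Build a filter (set of bit indices) over a host's join keys."""
          return {pos for v in values for pos in _positions(v)}

      def might_join(filter_bits, value) -> bool:
          """No false negatives; a small false-positive rate is the price."""
          return all(pos in filter_bits for pos in _positions(value))

      remote = bloom(f"http://ex.org/r/{n}" for n in range(500))
      local = [f"http://ex.org/r/{n}" for n in range(450, 550)]
      estimate = sum(might_join(remote, v) for v in local)
      print(f"estimated join candidates: {estimate}")  # ~50 + false positives
      ```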
    • A Probabilistic-Logical Framework for Ontology Matching  +
      (The framework is not only useful for aligning concepts and properties but can also include instance matching. For this purpose, one would only need to add a hidden predicate modeling instance correspondences. The resulting matching approach would immediately benefit from probabilistic joint inference, taking into account the interdependencies between terminological and instance correspondences.)
    • Towards a Knowledge Graph for Science  +
      (The work presented here delineates our initial steps towards a knowledge graph for science. By testing existing and developing new components, we have so far focused on some core technical aspects of the infrastructure. Naturally, there are a number of research problems and implementation issues as well as a range of socio-technical aspects that need to be addressed in order to realize the vision. Dimensions of open challenges are, among others:
      • the low-threshold integration of researchers through methods of crowd-sourcing, human-machine interaction, and social networks;
      • automated analysis, quality assessment, and completion of the knowledge graph as well as interlinking with external sources;
      • support for representing fuzzy information, scientific discourse and the evolution of knowledge;
      • development of new methods of exploration, retrieval, and visualization of knowledge graph information.)
    • LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data  +
      (We aim to explore the combination of LIMES with active learning strategies in such a way that manual configuration of the tool becomes unnecessary. Instead, matching results will be computed quickly by using the exemplars in both the source and target knowledge bases. Subsequently, they will be presented to the user, who will give feedback to the system by rating the quality of the found matches. This feedback in turn will be employed to improve the matching configuration and to generate a revised list of matching suggestions for the user. This iterative process will be continued until a sufficiently high quality (in terms of precision and recall) of matches is reached.)
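      The feedback loop described above can be caricatured in a few lines (candidate scores, the oracle, and the threshold update rule are all invented; LIMES' actual strategy is not shown here):

      ```python
      # Toy sketch of the active-learning loop above: repeatedly ask the user
      # about the candidate pair closest to the decision boundary and adjust
      # the acceptance threshold. All parameters and data are invented.
      def refine_threshold(candidates, oracle, threshold=0.8, step=0.02):
          """candidates: list of (pair, score); oracle(pair) -> bool."""
          while candidates:
              # Most informative question: the score nearest the threshold.
              candidates.sort(key=lambda c: abs(c[1] - threshold))
              (pair, score), candidates = candidates[0], candidates[1:]
              if oracle(pair):                       # confirmed match
                  threshold = min(threshold, score)
              else:                                  # rejected match
                  threshold = max(threshold, score + step)
          return threshold

      pairs = [(("a1", "b1"), 0.95), (("a2", "b2"), 0.79), (("a3", "b3"), 0.62)]
      print(refine_threshold(pairs, oracle=lambda p: p != ("a3", "b3")))
      ```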
    • LogMap: Logic-based and Scalable Ontology Matching  + (We are currently working on further optimizations, and in the near future, we are planning to integrate LogMap with a Protege-based front-end, such as the one implemented in our tool ContentMap.)
    • Bringing Relational Databases into the Semantic Web: A Survey  +
      (We close this paper by mentioning some problems that have only been lightly touched upon by database-to-ontology mapping solutions, as well as some aspects that need to be considered by future approaches.
      1. Ontology-based data update. A lot of the approaches mentioned offer SPARQL-based access to the contents of the database. However, this access is unidirectional. Since the emergence of SPARQL Update, which allows update operations on an RDF graph, the idea of issuing SPARQL Update requests that will be transformed to appropriate SQL statements and executed on the underlying relational database has become more and more popular. Some early work has already appeared in the OntoAccess prototype and the extensions of the D2RQ tool, D2RQ/Update and D2RQ++. However, as SPARQL Update is still under development and its semantics is not yet well defined, there is some ambiguity regarding the transformation of some SPARQL Update statements. Moreover, only basic (relation-to-class and attribute-to-property) mappings have been investigated so far. The issue of updating relational data through SPARQL Update is similar to the classic database view update problem; therefore, porting already proposed solutions would contribute significantly to dealing with this issue.
      2. Mapping update. Database schemas and ontologies constantly evolve to suit changing application and user needs. Therefore, established mappings between the two should also evolve, instead of being redefined or rediscovered from scratch. This issue is closely related to the previous one, since modifications in either participating model do not simply incur adaptations to the mapping but also cause some necessary changes to the other model as well. So far, only a few solutions have been proposed for the case of the unidirectional propagation of database schema changes to a generated ontology and the consequent adaptation of the mapping. The inverse direction, i.e. modification of the database as a result of changes in the ontology, has not been investigated thoroughly yet. On a practical note, both database trigger functions and mechanisms like the Link Maintenance Protocol (WOD-LMP) from the Silk framework could prove useful for solutions to this issue.
      3. Generation of Linked Data. A fair number of approaches support vocabulary reuse, a factor that has always been important for the progress of the Semantic Web, while a few other approaches try to discover automatically the most suitable classes or properties from popular vocabularies that can be mapped to a given database schema. Nonetheless, these efforts are still not adequate for the generation of RDF graphs that can be smoothly integrated into the Linking Open Data (LOD) Cloud. For the generation of true Linked Data, the real-world entities that database values represent should be identified, and links between them should be established, in contrast with the majority of current methods, which translate database values to RDF literals. Lately, a few interesting tools that handle the transformation of spreadsheets to Linked RDF data by analyzing the content of spreadsheet tables have been presented, with the most notable examples being the RDF extension for Google Refine and T2LD. Techniques such as the ones applied in these tools can certainly be adapted to the relational database paradigm.
      These aspects, together with the challenges enumerated in Section 7, mark the next steps for database-to-ontology mapping approaches. Although a lot of ground has been covered during the last decade, it looks like there is definitely some interesting road ahead in order to seamlessly integrate relational databases with the Semantic Web, turning it into reality at last.)
    • Zhishi.links Results for OAEI 2011  + (We look forward to building an instance matching system with better performance and higher stability in the future.)
    • Optimizing SPARQL Queries over Disparate RDF Data Sources through Distributed Semi-joins  +
      (We would like to further improve the query evaluation performance by introducing distributed-join-aware join reordering. We will make use of the current Sesame optimization techniques for local queries and add our own component which will reorder joins according to their relative costs. The costs will be based on statistics taking into account sub-query selectivity combined with the distinction of whether a triple pattern is supposed to be evaluated locally or at a remote SPARQL endpoint. In addition to join reordering, we would like to make use of statistics about SPARQL endpoints in order to optimize queries even further. Hopefully the recent initiative called Vocabulary of Interlinked Datasets (http://community.linkeddata.org/MediaWiki/index.php?VoiD) will get to a point where it can be used for this purpose.)
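      A skeletal version of such cost-based reordering (the cost model, penalty factor, and selectivity figures are invented for illustration, not the authors' statistics) might look like:

      ```python
      # Sketch of cost-based join reordering as discussed above: order triple
      # patterns by estimated selectivity, penalising remote evaluation.
      def order_patterns(patterns):
          """patterns: dicts with 'pattern', 'selectivity' (0..1, lower means
          fewer results) and 'remote' (True if sent to a SPARQL endpoint)."""
          def cost(p):
              penalty = 10.0 if p["remote"] else 1.0  # network round-trip factor
              return p["selectivity"] * penalty
          return sorted(patterns, key=cost)

      plan = order_patterns([
          {"pattern": "?s rdfs:label 'Berlin'", "selectivity": 0.001, "remote": True},
          {"pattern": "?s ?p ?o",               "selectivity": 1.0,   "remote": False},
          {"pattern": "?s rdf:type :City",      "selectivity": 0.05,  "remote": False},
      ])
      print([p["pattern"] for p in plan])
      ```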
    • FedX: Optimization Techniques for Federated Query Processing on Linked Data  +
      (While we focused on optimization techniques for conjunctive queries, namely basic graph patterns (BGPs), there is additional potential in developing novel, operator-specific optimization techniques for distributed settings (in particular for OPTIONAL queries), which we are planning to address in future work. In a future release, (remote) statistics (e.g., using VoID) can be incorporated for source selection and to further improve our join order algorithm.)
    • Querying over Federated SPARQL Endpoints : A State of the Art Survey  + ({{{Future work}}})