Search by property

This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.

A list of all pages that have the property "Has future work" with value "-". Since there are only a few results, nearby values are also displayed.

Showing below up to 20 results starting with #1.

List of results

    • Cross: an OWL wrapper for reasoning on relational databases
      A first direction for further work would be to try to strengthen the theorem, to obtain an equivalence of OWL consistency with full legality, i.e. taking foreign keys into account. This could be done using an expressive feature of OWL (the oneOf constructor, not mentioned in this paper), but would possibly make the reasoning intractable. Another solution would be to propose, in a similar way to finite model reasoning, an algorithm for closed-world reasoning that would not be allowed to create individuals. We also want to obtain more experimental results for the Cross implementation. Preliminary results are encouraging: the transformation of the schema of a real database (127 tables, 869 columns, 132 unicity constraints, no foreign keys) took around 1.5 s; the resulting ontology was loaded in Pellet in about 9 s, while reasoning took about 3 s. These results seem reasonable for quite a big schema. We now plan to experiment on the use cases presented in Section 6.3 with that database and a sample of other real databases.
    • Relational.OWL - A Data and Schema Representation Format Based on OWL
      A further extension for Relational.OWL could be a corresponding protocol extending the possibilities of Relational.OWL to particularly support data exchange or replication. There we could employ the advantages of our knowledge representation technique for recurring problems within such a data exchange process, e.g. identifying the same data items on remote databases. Although autonomously communicating databases in a metadata exchange are still more vision than reality, our model takes us one step further.
    • SLINT: A Schema-Independent Linked Data Interlinking System
      Although SLINT achieves good results on the tested datasets, this is not sufficient to evaluate the scalability of our system, which we consider the current limiting point because of the use of a weighted co-occurrence matrix. We will investigate a solution for this issue in our next work. Besides, we are also interested in automatic configuration of every threshold used in SLINT and in improving SLINT into a novel cross-domain interlinking system.
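The weighted co-occurrence matrix the authors name as their scalability bottleneck can be illustrated with a minimal sketch. This is not SLINT's actual implementation; the function name, data layout, and normalisation below are assumptions made for illustration only.

```python
from collections import Counter

def cooccurrence_weights(source_triples, target_triples):
    """Count how often a source property co-occurs with a target property
    on shared values, then normalise the counts into weights.
    Each input is a list of (instance_id, property, value) tuples."""
    # Index: value -> set of source properties carrying that value.
    src_index = {}
    for _inst, prop, val in source_triples:
        src_index.setdefault(val, set()).add(prop)
    counts = Counter()
    for _inst, prop, val in target_triples:
        for src_prop in src_index.get(val, ()):
            counts[(src_prop, prop)] += 1
    total = sum(counts.values()) or 1
    # Normalised co-occurrence weight per (source property, target property).
    return {pair: c / total for pair, c in counts.items()}

src = [("s1", "name", "Berlin"), ("s2", "name", "Paris")]
tgt = [("t1", "label", "Berlin"), ("t2", "label", "Paris")]
print(cooccurrence_weights(src, tgt))  # {('name', 'label'): 1.0}
```

The quadratic growth of this value index with dataset size is exactly why the authors consider it the limiting point.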
    • SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking
      As future work, we intend to investigate how our model can be adjusted to consider partial string matching in the similarity function that we proposed, and to accommodate different score distribution metrics as the threshold for the parameter. Also, we intend to evaluate this approach on different collections that may provide a more accurate reference alignment than the ones that we used in this work.
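One common way to realise the partial string matching mentioned above is to compare the shorter string against sliding windows of the longer one. This is a hedged sketch of that general idea, not SERIMI's proposed method; `partial_similarity` is a name invented here.

```python
from difflib import SequenceMatcher

def partial_similarity(a, b):
    """Similarity in [0, 1] tolerating partial matches: score the shorter
    string against the best-matching same-length window of the longer one."""
    a, b = a.lower(), b.lower()
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    best = 0.0
    for i in range(len(long_) - len(short) + 1):
        window = long_[i:i + len(short)]
        best = max(best, SequenceMatcher(None, short, window).ratio())
    return best

print(partial_similarity("Hawaii", "Honolulu, Hawaii, USA"))  # 1.0
```

A threshold over such scores would then play the role of the score distribution metric discussed in the excerpt.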
    • SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions
      As next steps, we plan to investigate whether VOID descriptions can easily be extended with more detailed statistics in order to allow for more accurate cardinality estimates and, thus, better query execution plans. On the other hand, the actual query execution has not yet been optimized in SPLENDID. Therefore, we plan to integrate optimization techniques as used in FedX. Moreover, the adoption of the SPARQL 1.1 federation extension will also allow for more efficient query execution.
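How cardinality estimates drive query planning can be sketched minimally: order triple patterns so the most selective one executes first. This assumes VOID-style per-predicate triple counts and a greedy ordering; it is an illustration, not SPLENDID's actual planner.

```python
def order_by_cardinality(patterns, predicate_counts):
    """Greedy join ordering: evaluate the triple pattern with the lowest
    estimated cardinality first. `patterns` are (s, p, o) tuples;
    `predicate_counts` maps a predicate to its number of triples, as a
    VOID description could report."""
    # Unknown predicates are pessimistically assumed unselective.
    default = max(predicate_counts.values(), default=1)
    return sorted(patterns, key=lambda p: predicate_counts.get(p[1], default))

patterns = [("?x", "rdf:type", "foaf:Person"), ("?x", "foaf:mbox", "?m")]
stats = {"rdf:type": 100_000, "foaf:mbox": 2_000}
print(order_by_cardinality(patterns, stats))
# [('?x', 'foaf:mbox', '?m'), ('?x', 'rdf:type', 'foaf:Person')]
```

Richer statistics (e.g. distinct subjects per predicate) would refine the key function without changing the overall shape of the plan.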
    • Use of OWL and SWRL for Semantic Relational Database Translation
      Currently, URIs returned by SBRD are unique but generally not resolvable. We intend to address this issue in future versions by generating resolvable URIs and incorporating the best practices of the Linking Open Data initiative. To the best of our knowledge, our rules and their usage are consistent with the design goals of the DL Safe SWRL Rules task force. Decidability is a critical aspect of our architecture, which therefore focuses on features such as the use of Horn rules with unary and binary predicates. We will continue to monitor the task force's progress and incorporate necessary modifications. The advantages of SWRL built-ins have also proven essential. It is our hope that they are addressed by the DL Safe task force and will be comparable to the built-ins provided by SWRL.
    • Querying the Web of Interlinked Datasets using VOID Descriptions
      Developing a tool which extracts well-defined VOID descriptions of datasets, and by this means evaluating our approach, is required future work to confirm the applicability of WoDQA to linked open data. Also, evaluating the analysis cost of WoDQA for a large VOID store will be possible once well-defined VOIDs are constructed.
    • Querying the Web of Data with Graph Theory-based Techniques
      Explore the co-reference issue in the Linked Data cloud. From the perspective of distributed SPARQL queries, this issue is getting worse as more data are published, and we plan to address it by using our Virtual Graph approach.
    • Unveiling the hidden bride: deep annotation for mapping and migrating legacy data to the Semantic Web
      For the future, there is a long list of open issues concerning deep annotation, from the more mundane, though important, ones (top) to far-reaching ones (bottom):
      (1) Granularity: So far we have only considered atomic database fields. For instance, one may find the string "Proceedings of the Eleventh International World Wide Web Conference, WWW2002, Honolulu, Hawaii, USA, 7–11 May 2002." as the title of a book, whereas one might rather be interested in separating this field into title, location, and date.
      (2) Automatic derivation of server-side Web page markup: A content management system like Zope could provide the means for automatically deriving server-side Web page markup for deep annotation. Thus, the database provider could be freed from any workload, while allowing for participation in the Semantic Web. Some steps in this direction are currently being pursued in the KAON CMS, which is based on Zope.
      (3) Other information structures: For now, we have built our deep annotation process on SQL and relational databases. Future schemes could exploit XQuery or an ontology-based query language.
      (4) Interlinkage: In the future, deep annotations may even link to each other, creating a dynamic interconnected Semantic Web that allows translation between different servers.
      (5) Opening the possibility to directly query the database certainly creates problems, such as new opportunities for denial-of-service attacks. In fact, some queries, e.g. ones that involve too many joins over large tables, may prove hazardous. Nevertheless, we see this as a challenge to be solved by clever schemes for allotting CPU processing time (with the possibility that a query is not answered because the time allotted to one query for one user is up) rather than a complete "no go."
      We believe that these options make deep annotation a rather intriguing scheme on which a considerable part of the Semantic Web might be built.
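The granularity issue in (1) can be made concrete on the authors' own example string. The regex below is a heuristic sketch tailored to this one citation pattern (acronym-plus-year marker, trailing date range), not a general field-splitting solution.

```python
import re

# Heuristic pattern: title, conference acronym (e.g. WWW2002),
# location, then a day-range date, with an optional trailing period.
FIELDS = re.compile(
    r"^(?P<title>.+?),\s*"
    r"(?P<acronym>[A-Z]+\d{4}),\s*"
    r"(?P<location>.+?),\s*"
    r"(?P<date>\d{1,2}[-–]\d{1,2}\s+\w+\s+\d{4})\.?$"
)

s = ("Proceedings of the Eleventh International World Wide Web Conference, "
     "WWW2002, Honolulu, Hawaii, USA, 7–11 May 2002.")
m = FIELDS.match(s)
print(m.group("title"))     # Proceedings of the Eleventh International World Wide Web Conference
print(m.group("location"))  # Honolulu, Hawaii, USA
print(m.group("date"))      # 7–11 May 2002
```

Splitting such atomic fields at annotation time is what would let the mapped ontology expose title, location, and date as separate properties.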
    • Updating Relational Data via SPARQL/Update
      Future work is planned for various aspects of OntoAccess. Further research needs to be done on bridging the conceptual gap between RDBs and the Semantic Web. Ontology-based write access to relational data creates completely new challenges on this topic compared with read-only approaches. The presence of schema constraints in the database can lead to the rejection of update requests that would otherwise be accepted by a native triple store. A feedback protocol that provides semantically rich information about the cause of a rejection and possible directions for improvement plays a major role in bridging the gap. Other database constraints such as assertions have to be evaluated as well to see if they can reasonably be supported in the mapping. Also, a more formal definition of the mapping language will be provided. Furthermore, we will extend our prototype implementation to support the SPARQL/Update MODIFY operation, SPARQL queries, and the just-mentioned feedback protocol.
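The idea of a feedback protocol, returning a structured explanation instead of a bare failure, can be sketched as follows. Function names and the schema format are invented for illustration; this is not OntoAccess's API.

```python
def check_update(row, schema):
    """Validate an insert against simple schema constraints and return
    semantically rich feedback on rejection.
    `row`: column -> value; `schema`: column -> {"not_null": bool}."""
    problems = []
    for col, rules in schema.items():
        if rules.get("not_null") and row.get(col) is None:
            problems.append(
                f"column '{col}' is declared NOT NULL; the update is "
                f"rejected unless a value for '{col}' is supplied"
            )
    return {"accepted": not problems, "feedback": problems}

result = check_update(
    {"name": "Alice", "email": None},
    {"name": {"not_null": True}, "email": {"not_null": True}},
)
print(result["accepted"])  # False
print(result["feedback"][0])
```

A native triple store would happily accept the same data, which is exactly the conceptual gap the excerpt describes.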
    • Discovering and Maintaining Links on the Web of Data
      Future work on Silk will focus on the following areas: We will implement further similarity metrics to support a broader range of linking use cases. To assist users in writing Silk-LSL specifications, machine learning techniques could be employed to adjust weightings or optimize the structure of the matching specification. Finally, we will evaluate the suitability of Silk for detecting duplicate entities within local datasets instead of using it to discover links between disparate RDF data sources. The value of the Web of Data rises and falls with the amount and the quality of links between data sources. We hope that Silk and other similar tools will help to strengthen the linkage between data sources and therefore contribute to the overall utility of the network.
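Similarity metrics in a linking specification are typically aggregated with weights, which is what the machine-learned weight adjustment above would tune. A small illustration combining edit-distance and token-overlap similarity; the 50/50 weighting and function names are arbitrary assumptions, not Silk-LSL.

```python
def levenshtein(a, b):
    """Classic edit-distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def combined_similarity(a, b, weights=(0.5, 0.5)):
    """Weighted aggregate of a string metric and a token-set metric."""
    lev = 1 - levenshtein(a, b) / max(len(a), len(b), 1)
    ta, tb = set(a.split()), set(b.split())
    jaccard = len(ta & tb) / (len(ta | tb) or 1)
    return weights[0] * lev + weights[1] * jaccard

print(combined_similarity("Web of Data", "Web of Data"))  # 1.0
```

Learning the weight vector from labelled link examples is one concrete form the proposed machine-learning assistance could take.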
    • Adaptive Integration of Distributed Semantic Web Data
      Future work will aim to investigate other data sets with different characteristics and larger data sets. As the approach presented in this paper focuses on efficiently executing a specific kind of query, that of adaptively ordering multiple joins, further work will focus on optimising other kinds of queries and implementing support for more SPARQL query language features. Future work will also concentrate on investigating how the work can be applied in various domains.
    • Analysing Scholarly Communication Metadata of Computer Science Events
      In further research, we aim to expand the analysis to other fields of science and to smaller events. Also, it is interesting to assess the impact of digitisation with regard to further scholarly communication means, such as journals (which are more important in fields other than computer science), workshops, funding calls and proposal applications as well as awards. Although large parts of our analysis methodology are already automated, we plan to further optimise the process so that analyses can be almost instantly generated from the OpenResearch data basis.
    • Querying Distributed RDF Data Sources with SPARQL
      In further work, we plan to work on mapping and translation rules between the vocabularies used by different SPARQL endpoints. Also, we will investigate generalizing the query patterns that can be handled, as well as blank nodes and identity relationships across graphs.
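At their simplest, the translation rules mentioned above can be a predicate-to-predicate table applied to triple patterns before sending them to an endpoint. An illustrative sketch with a hypothetical FOAF-to-vCard mapping, not the authors' planned rule language:

```python
# Hypothetical mapping between the vocabularies of two endpoints.
VOCAB_MAP = {"foaf:name": "vcard:fn", "foaf:mbox": "vcard:email"}

def translate_pattern(pattern, mapping=VOCAB_MAP):
    """Rewrite the predicate of a triple pattern for the target endpoint,
    leaving unknown predicates untouched."""
    s, p, o = pattern
    return (s, mapping.get(p, p), o)

print(translate_pattern(("?x", "foaf:name", "?n")))  # ('?x', 'vcard:fn', '?n')
```

Real rules would also need to handle structural mismatches (one predicate mapping to a property path), which is where the generalization the excerpt mentions comes in.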
    • Integration of Scholarly Communication Metadata using Knowledge Graphs
      In the context of the OSCOSS project on Opening Scholarly Communication in the Social Sciences, the SCM-KG approach will be used for providing authors with precise and complete lists of references during the article writing process.
    • ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints
      In the future we plan to extend ANAPSID with more powerful and lightweight operators like Eddy and MJoin, which are able to route received responses through different operators, and to adapt the execution to unpredictable delays by changing the order in which each data item is routed.
    • Towards a Knowledge Graph Representing Research Findings by Semantifying Survey Articles
      Integrating our methodology with the procedure of publishing survey articles can help to create a paradigm shift. We plan to further extend the ontology to cover other research methodologies and fields. For a more robust implementation of the proposed approach, we are planning to use and significantly expand the OpenResearch.org platform and user-friendly SPARQL auto-generation services for accessing metadata analysis by non-expert users. A more comprehensive evaluation of the services will be done after the implementation of the curation, exploration and discovery services. In addition, our intention is to develop and foster a living community around OpenResearch.org and SemSur, to extend the ontology and to ingest metadata covering other research fields.
    • A Survey of Current Link Discovery Frameworks
      No future work exists.
    • AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies
      No data available now.