
Property:Has future work


This is a property of type Text.

Pages using the property "Has future work"

Showing 25 pages using this property.


A
A Probabilistic-Logical Framework for Ontology Matching: The framework is not only useful for aligning concepts and properties but can also include instance matching. For this purpose, one would only need to add a hidden predicate modeling instance correspondences. The resulting matching approach would immediately benefit from probabilistic joint inference, taking into account the interdependencies between terminological and instance correspondences.
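As a loose illustration of this idea (not the paper's implementation), the sketch below treats instance correspondences as an additional set of hidden variables and scores terminological and instance matches jointly, so that a matched pair of types supports an instance match between their members; all names, similarity scores, and the coherence weight are invented.

```python
# Hypothetical sketch (not the paper's code): adding a hidden predicate
# for instance correspondences to a joint matching model, so that
# inference couples terminological and instance matches. All names,
# similarity scores, and the coherence weight are illustrative.
from itertools import chain, combinations

# Observed a-priori similarities (evidence).
concept_sim = {("ex1:Paper", "ex2:Article"): 0.8}
instance_sim = {("ex1:p42", "ex2:a7"): 0.6}  # hidden predicate's candidates

# Typing of instances in each ontology.
type_of = {"ex1:p42": "ex1:Paper", "ex2:a7": "ex2:Article"}

def joint_score(concepts, instances, w_coherence=1.0):
    """Reward instance matches whose subjects' types are also matched:
    the interdependency that joint inference exploits."""
    score = sum(concept_sim[c] - 0.5 for c in concepts)
    score += sum(instance_sim[i] - 0.5 for i in instances)
    score += sum(
        w_coherence
        for (x, y) in instances
        if (type_of[x], type_of[y]) in concepts
    )
    return score

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Exhaustive MAP "inference" over the toy hypothesis space.
best = max(
    ((set(c), set(i)) for c in powerset(concept_sim) for i in powerset(instance_sim)),
    key=lambda a: joint_score(*a),
)
print(best)
```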
A Semantic Web Middleware for Virtual Data Integration on the Web: Other future work will be the support for DESCRIBE queries and IRIs as subjects. In the future, the mediator should also use an OWL-DL reasoner to infer additional types for subject nodes specified in the query pattern. Currently, types have to be explicitly specified for each BGP (more precisely, for the first occurrence: the algorithm caches already known types). OWL-DL constraints, such as a qualified cardinality restriction on obs:byObserver with owl:allValuesFrom obs:Observer, would allow the mediator to deduce types of other nodes in the query pattern.
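To make the intended deduction concrete, here is a minimal sketch, assuming a hand-coded restriction table in place of a real OWL-DL reasoner; the property and class names mirror the entry's example, everything else is invented.

```python
# Hypothetical sketch (not the mediator's code): deducing node types in
# a query pattern from an allValuesFrom constraint, instead of
# requiring explicit types per BGP.
RESTRICTIONS = {
    # constrained class: [(property, value class from allValuesFrom)]
    "obs:Observation": [("obs:byObserver", "obs:Observer")],
}

def infer_types(pattern, known_types):
    """pattern: list of (subject, predicate, object) query triples."""
    types = dict(known_types)
    for s, p, o in pattern:
        for prop, value_class in RESTRICTIONS.get(types.get(s), []):
            if p == prop:
                # allValuesFrom: every value of prop belongs to value_class
                types.setdefault(o, value_class)
    return types

pattern = [("?obs", "obs:byObserver", "?who")]
print(infer_types(pattern, {"?obs": "obs:Observation"}))
# -> {'?obs': 'obs:Observation', '?who': 'obs:Observer'}
```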
A Survey of Current Link Discovery Frameworks: No future work exists.
ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints: In the future we plan to extend ANAPSID with more powerful and lightweight operators like Eddy and MJoin, which are able to route received responses through different operators, and adapt the execution to unpredictable delays by changing the order in which each data item is routed.
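As a rough illustration of the adaptivity described here (not ANAPSID's actual operators), the sketch below routes each incoming item through whichever join operator currently responds fastest, re-ranking operators as observed delays change; all names and delays are invented.

```python
# Hypothetical sketch: an eddy-style router that adapts the routing
# order of data items to unpredictable operator delays.
import time

class JoinOp:
    def __init__(self, name, delay):
        self.name, self.delay = name, delay
        self.observed = 0.001  # moving estimate of response time

    def process(self, item):
        start = time.perf_counter()
        time.sleep(self.delay)  # stands in for endpoint latency
        self.observed = 0.8 * self.observed + 0.2 * (time.perf_counter() - start)
        return f"{item}*{self.name}"

def eddy(items, operators):
    """Route each item through all operators, currently-fastest first."""
    results = []
    for item in items:
        for op in sorted(operators, key=lambda o: o.observed):
            item = op.process(item)
        results.append(item)
    return results

ops = [JoinOp("j1", 0.002), JoinOp("j2", 0.0005)]
print(eddy(["t1", "t2", "t3"], ops))
```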
Accessing and Documenting Relational Databases through OWL Ontologies: No data available now.
Adaptive Integration of Distributed Semantic Web Data: Future work will aim to investigate other data sets with different characteristics and larger data sets. As the approach presented in this paper focuses on efficiently executing a specific kind of query, that of adaptively ordering multiple joins, further work will focus on optimising other kinds of queries and implementing support for more SPARQL query language features. Future work will also concentrate on investigating how the work can be applied in various domains.
AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies: No data available now.
Analysing Scholarly Communication Metadata of Computer Science Events: In further research, we aim to expand the analysis to other fields of science and to smaller events. It would also be interesting to assess the impact of digitisation on further scholarly communication means, such as journals (which are more important in fields other than computer science), workshops, funding calls and proposal applications, as well as awards. Although large parts of our analysis methodology are already automated, we plan to further optimise the process so that analyses can be generated almost instantly from the OpenResearch data basis.
Avalanche: Putting the Spirit of the Web back into Semantic Web Querying: The Avalanche system has shown how a completely heterogeneous distributed query engine that makes no assumptions about data distribution could be implemented. The current approach does have a number of limitations. In particular, we need to better understand the employed objective functions for the planner, investigate if the requirements put on participating triple-stores are reasonable, explore if Avalanche can be changed to a stateless model, and empirically evaluate if the approach truly scales to a large number of hosts. We discuss each of these issues in turn.

The core optimization of the Avalanche system lies in its cost and utility function. The basic utility function only considers possible joins, with no information regarding the probability of the respective join. The proposed utility extension UE estimates the join probability of two highly selective molecules. Although this improves the accuracy of the objective function, its limitation to highly selective molecules is often impractical, as many queries (such as our example query) combine highly selective molecules with non-selective ones. Hence, we need to find a probabilistic distributed join cardinality estimation for low-selectivity molecules. One approach might be the usage of bloom-filter caches to store precomputed, "popular" estimates; another might be investigating sampling techniques for distributed join estimation (a sketch of the bloom-filter idea follows this entry).

In order to support Avalanche, existing triple-stores should be able to:
– report statistics: cardinalities, bloom filters, other future extensions
– support the execution of distributed joins (common in distributed databases), which could be delegated to an intermediary but would be inefficient
– share the same key space (can be URIs, but would result in bandwidth-intensive joins and merges)
Whilst these requirements seem simple, we need to investigate how complex these extensions of triple-stores are in practice. Even better would be an extension of the SPARQL standard with the above-mentioned operations, which we will attempt to propose.

The current Avalanche process assumes that hosts keep partial results throughout plan execution to reduce the cost of local database operations, and that result-views are kept for the duration of a query. This limits the number of queries a host can handle. We intend to investigate if a stateless approach is feasible. Note that the simple approach, the use of RESTful services, may not be applicable, as the size of the state (i.e., the partial results) may be huge and overburden the available bandwidth.

We designed Avalanche with the need for high scalability in mind. The core idea follows the principle of decentralization. It also supports asynchrony using asynchronous HTTP requests to avoid blocking, autonomy by delegating the coordination and execution of the distributed join/update/merge operations to the hosts, concurrency through the pipeline shown in Figure 1, symmetry by allowing each endpoint to act as the initiating Avalanche node for a query caller, and fault tolerance through a number of time-outs and stopping conditions. Nonetheless, an empirical evaluation of Avalanche with a large number of hosts is still missing, a non-trivial shortcoming (due to the lack of suitable, partitioned datasets and the significant experimental complexity) that we intend to address in the near future.
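As a rough illustration of the bloom-filter idea mentioned above (not Avalanche's implementation), the sketch below estimates a distributed join's cardinality by probing a filter shipped from a remote host; the filter parameters and keys are invented.

```python
# Hypothetical sketch: estimating distributed join cardinality from a
# bloom filter of the remote host's join-key column.
import hashlib

class BloomFilter:
    def __init__(self, size=4096, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            h = hashlib.sha1(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

def estimate_join_cardinality(local_keys, remote_filter):
    """Count local join keys that hit the remote filter; bloom filters
    never miss true matches, so this over-estimates only by the false
    positive rate."""
    return sum(1 for k in local_keys if remote_filter.might_contain(k))

# Host B precomputes a filter over its join column and ships it to A.
remote = BloomFilter()
for key in ["ex:s1", "ex:s2", "ex:s3"]:
    remote.add(key)

print(estimate_join_cardinality(["ex:s2", "ex:s9"], remote))  # -> 1 (likely)
```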
B
Bringing Relational Databases into the Semantic Web: A Survey: We close this paper by mentioning some problems that have only been lightly touched upon by database-to-ontology mapping solutions, as well as some aspects that need to be considered by future approaches.
1. Ontology-based data update. A lot of the approaches mentioned offer SPARQL-based access to the contents of the database. However, this access is unidirectional. Since the emergence of SPARQL Update, which allows update operations on an RDF graph, the idea of issuing SPARQL Update requests that will be transformed to appropriate SQL statements and executed on the underlying relational database has become more and more popular. Some early work has already appeared in the OntoAccess prototype and the extensions of the D2RQ tool, D2RQ/Update and D2RQ++. However, as SPARQL Update is still under development and its semantics is not yet well defined, there is some ambiguity regarding the transformation of some SPARQL Update statements. Moreover, only basic (relation-to-class and attribute-to-property) mappings have been investigated so far. The issue of updating relational data through SPARQL Update is similar to the classic database view update problem; therefore, porting already proposed solutions would contribute significantly to dealing with this issue. A sketch of such a translation follows this entry.
2. Mapping update. Database schemas and ontologies constantly evolve to suit changing application and user needs. Therefore, established mappings between the two should also evolve, instead of being redefined or rediscovered from scratch. This issue is closely related to the previous one, since modifications in either participating model do not simply incur adaptations to the mapping but also cause some necessary changes to the other model as well. So far, only a few solutions have been proposed for the case of the unidirectional propagation of database schema changes to a generated ontology and the consequent adaptation of the mapping. The inverse direction, i.e. modification of the database as a result of changes in the ontology, has not been investigated thoroughly yet. On a practical note, both database trigger functions and mechanisms like the Link Maintenance Protocol (WOD-LMP) from the Silk framework could prove useful for solutions to this issue.
3. Generation of Linked Data. A fair number of approaches support vocabulary reuse, a factor that has always been important for the progress of the Semantic Web, while a few other approaches try to automatically discover the most suitable classes or properties from popular vocabularies that can be mapped to a given database schema. Nonetheless, these efforts are still not adequate for the generation of RDF graphs that can be smoothly integrated into the Linking Open Data (LOD) Cloud. For the generation of true Linked Data, the real-world entities that database values represent should be identified and links between them should be established, in contrast with the majority of current methods, which translate database values to RDF literals. Lately, a few interesting tools that handle the transformation of spreadsheets to Linked RDF data by analyzing the content of spreadsheet tables have been presented, with the most notable examples being the RDF extension for Google Refine and T2LD. Techniques such as the ones applied in these tools can certainly be adapted to the relational database paradigm.
These aspects, together with the challenges enumerated in Section 7, mark the next steps for database-to-ontology mapping approaches. Although a lot of ground has been covered during the last decade, it looks like there is definitely some interesting road ahead in order to seamlessly integrate relational databases with the Semantic Web, turning it into reality at last.
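As a minimal illustration of point 1 (not taken from any of the cited tools), the sketch below translates a single SPARQL-Update-style triple insertion into SQL under an assumed relation-to-class and attribute-to-property mapping; the mapping, table, and URI scheme are invented.

```python
# Hypothetical sketch: translating a minimal SPARQL Update triple into
# SQL under a basic relation-to-class / attribute-to-property mapping.
import re

# Mapping: class IRI -> table, property IRI -> column.
CLASS_TO_TABLE = {"http://example.org/Person": "person"}
PROP_TO_COLUMN = {"http://example.org/name": "name"}

def triple_to_sql(subject, predicate, literal):
    """Map <http://example.org/Person/42> ex:name "Alice" to an UPDATE
    on the row whose primary key is encoded in the subject IRI."""
    m = re.match(r"(.+)/(\d+)$", subject)
    class_iri, pk = m.group(1), int(m.group(2))
    table = CLASS_TO_TABLE[class_iri]
    column = PROP_TO_COLUMN[predicate]
    # Parameterized statement; execute with a DB-API cursor.
    return f"UPDATE {table} SET {column} = %s WHERE id = %s", (literal, pk)

sql, params = triple_to_sql(
    "http://example.org/Person/42", "http://example.org/name", "Alice"
)
print(sql, params)  # UPDATE person SET name = %s WHERE id = %s ('Alice', 42)
```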
C
Cross: an OWL wrapper for reasoning on relational databases: A first direction for further work would be to try to strengthen the theorem, to have an equivalence of OWL consistency with full legality, i.e. taking into account foreign keys. This could actually be done by using an expressive feature of OWL (the oneOf constructor, not mentioned in this paper), but would possibly make the reasoning intractable. Another solution would be to propose, in a similar way to finite model reasoning, an algorithm of closed-world reasoning which would not be allowed to create individuals. We also want to get more experimental results for the Cross implementation. Preliminary results are encouraging: the transformation of the schema of a real database (127 tables, 869 columns, 132 unicity constraints, no foreign keys) took around 1.5 s; the resulting ontology was loaded in Pellet in about 9 s, while reasoning took about 3 s. Those results seem reasonable for quite a big schema. We now plan to experiment on the use cases presented in Section 6.3 with that database and a sample of other real databases.
D
D2RQ – Treating Non-RDF Databases as Virtual RDF Graphs: No future work exists.
DataMaster – a Plug-in for Importing Schemas and Data from Relational Databases into Protégé: No data available now.
Discovering and Maintaining Links on the Web of Data: Future work on Silk will focus on the following areas: We will implement further similarity metrics to support a broader range of linking use cases. To assist users in writing Silk-LSL specifications, machine learning techniques could be employed to adjust weightings or optimize the structure of the matching specification. Finally, we will evaluate the suitability of Silk for detecting duplicate entities within local datasets instead of using it to discover links between disparate RDF data sources. The value of the Web of Data rises and falls with the amount and the quality of links between data sources. We hope that Silk and other similar tools will help to strengthen the linkage between data sources and therefore contribute to the overall utility of the network.
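As a loose illustration of the weighting idea (not Silk's implementation), the sketch below learns the weights of an aggregated similarity measure from labeled link examples with a simple perceptron-style update; the metrics, data, and learning rate are invented.

```python
# Hypothetical sketch: adjusting the weightings of an aggregated
# similarity measure from labeled link examples.
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def prefix_sim(a, b):
    n = min(len(a), len(b))
    same = sum(1 for i in range(n) if a[i] == b[i])
    return same / max(len(a), len(b), 1)

METRICS = [jaccard, prefix_sim]

def learn_weights(examples, epochs=50, lr=0.1, threshold=0.5):
    """examples: list of (label_a, label_b, is_link) tuples."""
    weights = [1.0 / len(METRICS)] * len(METRICS)
    for _ in range(epochs):
        for a, b, is_link in examples:
            scores = [m(a, b) for m in METRICS]
            predicted = sum(w * s for w, s in zip(weights, scores)) >= threshold
            error = (1 if is_link else 0) - (1 if predicted else 0)
            weights = [w + lr * error * s for w, s in zip(weights, scores)]
    return weights

examples = [
    ("Berlin City", "City of Berlin", True),
    ("Berlin City", "Munich Airport", False),
]
print(learn_weights(examples))
```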
F
FedX: Optimization Techniques for Federated Query Processing on Linked Data: While we focused on optimization techniques for conjunctive queries, namely basic graph patterns (BGPs), there is additional potential in developing novel, operator-specific optimization techniques for distributed settings (in particular for OPTIONAL queries), which we are planning to address in future work. In a future release, (remote) statistics (e.g., using VoID) can be incorporated for source selection and to further improve our join order algorithm.
From Relational Data to RDFS Models: No data available now.
I
Integration of Scholarly Communication Metadata using Knowledge Graphs: In the context of the OSCOSS project on Opening Scholarly Communication in the Social Sciences, the SCM-KG approach will be used for providing authors with precise and complete lists of references during the article writing process.
K
KnoFuss: A Comprehensive Architecture for Knowledge Fusion: -
L
LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data: We aim to explore the combination of LIMES with active learning strategies in such a way that manual configuration of the tool becomes unnecessary. Instead, matching results will be computed quickly by using the exemplars in both the source and target knowledge bases. Subsequently, they will be presented to the user, who will give feedback to the system by rating the quality of the found matches. This feedback in turn will be employed to improve the matching configuration and to generate a revised list of matching suggestions for the user. This iterative process will be continued until a sufficiently high quality (in terms of precision and recall) of matches is reached.
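As a rough illustration of the envisioned loop (not LIMES itself), the sketch below refines a single similarity threshold from user ratings of the least certain suggestion in each round; the similarity measure, oracle, and update rule are invented.

```python
# Hypothetical sketch: an active-learning loop that refines a matching
# configuration (here, one threshold) from user feedback on matches.
import difflib

def similarity(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

def active_learning_loop(source, target, oracle, rounds=5, step=0.05):
    threshold = 0.9  # start strict, then adapt from feedback
    for _ in range(rounds):
        suggestions = [
            (s, t) for s in source for t in target
            if similarity(s, t) >= threshold
        ]
        if not suggestions:
            threshold -= step
            continue
        # Ask the user to rate the least certain suggestion.
        pair = min(suggestions, key=lambda p: similarity(*p))
        if oracle(pair):      # accepted: configuration may relax
            threshold -= step
        else:                 # rejected: configuration must tighten
            threshold += step
    return threshold

source = ["Berlin", "Munich", "Hamburg"]
target = ["Berlin, Germany", "Muenchen", "Bremen"]
final = active_learning_loop(source, target, oracle=lambda p: "Berlin" in p[1])
print(round(final, 2))
```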
LogMap: Logic-based and Scalable Ontology Matching: We are currently working on further optimizations, and in the near future we are planning to integrate LogMap with a Protégé-based front-end, such as the one implemented in our tool ContentMap.
O
Optimizing SPARQL Queries over Disparate RDF Data Sources through Distributed Semi-joins: We would like to further improve the query evaluation performance by introducing distributed-join-aware join reordering. We will make use of the current Sesame optimization techniques for local queries and add our own component, which will re-order joins according to their relative costs. The costs will be based on statistics taking into account sub-query selectivity, combined with the distinction of whether a triple pattern is supposed to be evaluated locally or at a remote SPARQL endpoint. In addition to join re-ordering, we would like to make use of statistics about SPARQL endpoints in order to optimize queries even further. Hopefully the recent initiative called Vocabulary of Interlinked Datasets (http://community.linkeddata.org/MediaWiki/index.php?VoiD) will get to a point where it could be used for this purpose.
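As a minimal illustration of the planned reordering (not the paper's component), the sketch below orders triple patterns by a cost that combines estimated selectivity with a penalty for remote evaluation; all numbers are invented.

```python
# Hypothetical sketch: greedy join reordering by estimated cost, where
# cost = selectivity * penalty for remote evaluation.
from dataclasses import dataclass

@dataclass
class TriplePattern:
    pattern: str
    selectivity: float  # estimated fraction of matching triples (0..1)
    remote: bool        # evaluated at a remote SPARQL endpoint?

REMOTE_PENALTY = 3.0  # assumed relative cost of a remote round trip

def cost(tp: TriplePattern) -> float:
    return tp.selectivity * (REMOTE_PENALTY if tp.remote else 1.0)

def reorder(patterns):
    """Evaluate cheap, selective, local patterns first so later
    (semi-)joins see small intermediate results."""
    return sorted(patterns, key=cost)

plan = reorder([
    TriplePattern("?s rdf:type ex:Protein", 0.30, remote=False),
    TriplePattern("?s ex:name 'p53'", 0.001, remote=True),
    TriplePattern("?s ex:interactsWith ?o", 0.10, remote=False),
])
for tp in plan:
    print(f"{cost(tp):.4f}  {tp.pattern}")
```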
Q
Querying Distributed RDF Data Sources with SPARQL: In further work, we plan to work on mapping and translation rules between the vocabularies used by different SPARQL endpoints. Also, we will investigate generalizing the query patterns that can be handled, as well as blank nodes and identity relationships across graphs.
Querying over Federated SPARQL Endpoints: A State of the Art Survey: No data available now.
Querying the Web of Data with Graph Theory-based Techniques: We plan to explore the co-reference issue in the Linked Data cloud. From the perspective of distributed SPARQL queries, this issue is getting worse as more data are published, and we plan to address it by using our Virtual Graph approach.
Querying the Web of Interlinked Datasets using VOID Descriptions: Developing a tool which extracts well-defined VoID descriptions of datasets, and by this means evaluating our approach, is required future work to confirm the applicability of WoDQA on linked open data. Also, evaluating the analysis cost of WoDQA for a large VoID store will be possible once well-defined VoID descriptions are constructed.
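As a loose illustration of VoID-driven source selection in the spirit of WoDQA (not its implementation), the sketch below matches a triple pattern's predicate namespace against VoID-style dataset descriptions; the descriptions and store layout are invented.

```python
# Hypothetical sketch: pruning a federation by matching a bound
# predicate against the vocabularies declared in VoID descriptions.
VOID_STORE = {
    "http://example.org/void/dbpedia": {
        "vocabularies": {"http://dbpedia.org/ontology/"},
        "sparqlEndpoint": "http://dbpedia.org/sparql",
    },
    "http://example.org/void/geo": {
        "vocabularies": {"http://www.w3.org/2003/01/geo/wgs84_pos#"},
        "sparqlEndpoint": "http://geo.example.org/sparql",
    },
}

def relevant_endpoints(predicate_iri):
    """Return endpoints whose declared vocabularies cover the
    predicate's namespace; a bound predicate prunes the federation."""
    return [
        d["sparqlEndpoint"]
        for d in VOID_STORE.values()
        if any(predicate_iri.startswith(v) for v in d["vocabularies"])
    ]

print(relevant_endpoints("http://dbpedia.org/ontology/birthPlace"))
# -> ['http://dbpedia.org/sparql']
```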
Facts about "Has future work"
Has type: Text
("Has type" is a predefined property that describes the datatype of a property and is provided by Semantic MediaWiki.)