Planet RDF

It's triples all the way down

Published: 2017-01-20T13:10:52.96Z


AKSW Colloquium, 23.01.2017, Automatic Mappings of Tables to Knowledge Graphs and Open Table Extraction


At the upcoming colloquium on 23.01.2017, Ivan Ermilov will present his work on automatic mappings of tables to knowledge graphs, published as TAIPAN: Automatic Property Mapping for Tabular Data at the EKAW 2016 conference, as well as an extension of this work, including: the Open Table Extraction (OTE) approach, i.e. how to generate meaningful information from a big corpus of tables; how to benchmark OTE and which benchmarks are available; and OTE use cases and applications. About the AKSW Colloquium: This event is part of a series of events about Semantic Web ...
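The core idea of property mapping can be illustrated with a naive sketch: match table column headers against the labels of knowledge-graph properties by string similarity. This is not the TAIPAN algorithm, and the property URIs and labels below are illustrative stand-ins:

```python
from difflib import SequenceMatcher

# Hypothetical property labels from a knowledge graph (illustrative only;
# not the actual TAIPAN approach or real DBpedia label data).
KG_PROPERTIES = {
    "http://dbpedia.org/ontology/populationTotal": "population total",
    "http://dbpedia.org/ontology/areaTotal": "area total",
    "http://dbpedia.org/ontology/country": "country",
}

def map_columns(headers, properties=KG_PROPERTIES, threshold=0.5):
    """Map each table column header to the most similar property label."""
    mapping = {}
    for header in headers:
        best_uri, best_score = None, threshold
        for uri, label in properties.items():
            score = SequenceMatcher(None, header.lower(), label).ratio()
            if score > best_score:
                best_uri, best_score = uri, score
        mapping[header] = best_uri
    return mapping

print(map_columns(["Population", "Country"]))
```

A real system would of course use semantic matching (type checks on the column values, not just header strings), which is the harder problem the paper addresses.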

Mega-City One: Smart City


“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and Internet of Things (IoT) solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services…ICT allows city officials to interact directly with the community and the city infrastructure and to monitor what is happening in the city, how the city is evolving, and how to enable a better quality of life. Through ...

A river of research, not news


I already hate the phrase “fake news”. We have better words to describe lies, disinformation, propaganda and slander, so let's just use those. While the phrase “fake news” might

PRESS RELEASE: “HOBBIT so far.” is now available


The latest release reports on the conferences our team attended in 2016 as well as the published blog posts. Furthermore, it gives a short analysis of the survey that lets us verify the requirements for our benchmarks and the new HOBBIT platform. Last but not least, the release gives a short outlook on our plans for 2017, including the founding of the HOBBIT association. Have a look at the whole press release on the HOBBIT website.

Read Write Web — Q4 Summary — 2016


Summary: An eventful 2016 draws to a close; steady progress has been made on standards, implementations and apps for reading and writing to the web. Some press coverage is starting to emerge, with pieces on a decentralized web and putting data back in the hands of owners. Also published was a nice review and predictions for trends in 2017 on the (semantic) web. Linked Data Notifications has become a Candidate Recommendation and is expected to become a full Recommendation next year, as the working group has been extended slightly. Data on the Web Best Practices is also now a Proposed ...

Moving to Kolab from Google mail (“G-Suite”)


I had a seven-year-old Google ‘business’ account (now called ‘G-Suite’, previously Google Apps, I think) from back when it was free. Danbri put me onto it, and it was brilliant because you can use your own domain with it. It’s been very useful, but I’ve been thinking of moving to a paid-for service for a while.

A simple Raspberry Pi-based picture frame using Flickr


I made this Raspberry Pi picture frame – initially with a screen – as a present for my parents for their wedding anniversary. After user testing, I realised that what they really wanted was a tiny version that they could plug into their TV, so I recently made a Pi Zero version to do that. It uses a Raspberry Pi 3 or Pi Zero with full-screen Chromium. I’ve used Flickr as a backend: I made a new account and used their handy upload-by-email function (which you can set to make uploads private) so that all members of the family can send pictures to it. I initially assumed ...

LIRC on a Raspberry Pi for a silver Apple TV remote


The idea here is to control a full-screen Chromium webpage via a simple remote control (I’m using it for full-screen TV-like prototypes). It’s very straightforward really, but I couldn’t find the right kind of summary of

Twitter for ESP 8266


I’ve been using the

A modern neural network in 11 lines of Python


And a great learning tool for understanding neural nets.
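For readers who want the gist without following the link, here is a minimal pure-Python sketch of the same idea: a tiny two-layer network trained by plain gradient descent on XOR. It is not the code from the referenced post (which, as the title suggests, is a more compact NumPy formulation):

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: XOR (illustrative; the referenced post may use other data).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
    return hidden, out

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

initial = loss()
lr = 0.5
for _ in range(5000):
    for x, y in DATA:
        hidden, out = forward(x)
        d_out = (out - y) * out * (1 - out)  # error at the output pre-activation
        for j in range(H):
            # backpropagate through hidden unit j (using w2[j] before updating it)
            d_h = d_out * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

final = loss()
print(f"loss: {initial:.3f} -> {final:.3f}")
```

The numpy version in the linked post packs the same forward pass and backward pass into matrix operations, which is exactly what makes it a good learning tool.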

4th Big Data Europe Plenary at Leipzig University


The meeting, hosted by our partner InfAI e.V., took place from the 14th to the 15th of December at the University of Leipzig. The 29 attendees, including 15 partners, discussed and reviewed the progress of all work packages in 2016 and planned the activities and workshops taking place in the next six months. On the second day we talked about several societal-challenge pilots in the fields of AgroKnow, transport, security etc. It was the last plenary of the year, and we thank everybody for their work in 2016. Big Data Europe and our partners ...

University of Edinburgh joins DCMI as Institutional Member


2016-12-15, The DCMI Governing Board is pleased to announce that the University of Edinburgh has joined DCMI as an Institutional Member. The University of Edinburgh is a world-leading centre of academic excellence with the mission to create, disseminate and curate knowledge. As a great civic university, Edinburgh especially values its intellectual and economic relationship with the Scottish community that forms its base and provides the foundation from which it will continue to look to the widest international horizons, enriching both itself and Scotland. Alasdair MacDonald of the Edinburgh University Library will represent the University on the DCMI Governing Board. For information ...

SANSA 0.1 (Semantic Analytics Stack) Released


Dear all, the Smart Data Analytics group / AKSW are very happy to announce SANSA 0.1 – the initial release of the Scalable Semantic Analytics Stack. SANSA combines distributed computing and semantic technologies in order to allow powerful machine learning, inference and querying capabilities for large knowledge graphs. Website: GitHub: Download: ChangeLog: You can find the FAQ and usage examples at . The following features are currently supported by SANSA: support for reading and writing RDF files in N-Triples format; support for reading OWL files in various standard formats; querying and partitioning based on Sparqlify; support ...

AKSW wins award for Best Resources Paper at ISWC 2016 in Japan


Our paper, “LODStats: The Data Web Census Dataset”, won the award for Best Resources Paper at the recent ISWC 2016 conference in Kobe, Japan, the premier international forum for the Semantic Web and Linked Data community. The paper presents the LODStats dataset, which provides a comprehensive picture of the current state of a significant part of the Data Web. Congrats to Ivan Ermilov, Jens Lehmann, Michael Martin and Sören Auer. Please find the complete list of winners here.

PhD Proposal: Ankur Padia, Dealing with Dubious Facts in Knowledge Graphs


Dissertation Proposal: Dealing with Dubious Facts in Knowledge Graphs. Ankur Padia, 1:00-3:00pm Wednesday, 30 November 2016, ITE 325b, UMBC. Knowledge graphs are structured representations of facts, where nodes are real-world entities or events and edges are associations between pairs of entities. Knowledge graphs can be constructed using automatic or manual techniques. Manual techniques produce high-quality knowledge graphs but are expensive, time-consuming and not scalable. Hence, automatic information extraction techniques are used to create scalable knowledge graphs, but the extracted information can be of poor quality due to the presence of dubious facts. An extracted fact ...
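The structure described above can be sketched minimally: facts as subject-predicate-object edges, each carrying an extraction confidence, with dubious facts filtered by a threshold. The entities and scores below are made up for illustration and are not from the proposal:

```python
# Hypothetical miniature knowledge graph: (subject, predicate, object, confidence)
# quadruples, where confidence models the extractor's certainty. Illustrative only.
FACTS = [
    ("Barack_Obama", "bornIn", "Hawaii", 0.95),
    ("Barack_Obama", "bornIn", "Kenya", 0.10),   # dubious extracted fact
    ("Hawaii", "partOf", "United_States", 0.99),
]

def reliable_facts(facts, threshold=0.5):
    """Keep only facts whose extraction confidence meets the threshold."""
    return [(s, p, o) for s, p, o, c in facts if c >= threshold]

for triple in reliable_facts(FACTS):
    print(triple)
```

A fixed threshold is of course the crudest possible treatment; the point of the proposal is presumably to do something smarter with such uncertain facts than simply discarding them.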

AKSW Colloquium, 28.11.2016, NED using PBOH + Large-Scale Learning of Relation-Extraction Rules.


In the upcoming colloquium, on November 28th at 3 PM, two papers will be presented. Probabilistic Bag-Of-Hyperlinks Model for Entity Linking: Diego Moussallem will discuss the paper “Probabilistic Bag-Of-Hyperlinks Model for Entity Linking” by Octavian-Eugen Ganea et al., which was accepted at WWW 2016. Abstract: Many fundamental problems in natural language processing rely on determining what entities appear in a given text. Commonly referenced as entity linking, this step is a fundamental component of many NLP tasks such as text understanding, automatic summarization, semantic search or machine translation. Name ambiguity, word polysemy, context dependencies and a heavy-tailed distribution of entities contribute to ...

Leveraging KBpedia Aspects To Generate Training Sets Automatically


In previous articles I have covered multiple ways to create training corpuses for unsupervised learning and positive and negative training sets for supervised learning 1, 2, 3 using Cognonto and KBpedia. Different structures inherent to a knowledge graph like KBpedia can lead to quite different corpuses and sets. Each of these corpuses or sets may yield different predictive powers depending on the task at hand. So far we have covered two ways to leverage the KBpedia Knowledge Graph to automatically create positive and negative training corpuses: using the links that exist between each KBpedia reference concept and their ...

Dynamic Machine Learning Using the KBpedia Knowledge Graph – Part 2


In the first part of this series we found good hyperparameters for a single linear SVM classifier. In part 2, we will try another technique to improve the performance of the system: ensemble learning. So far we have reached 95% accuracy by tweaking the hyperparameters and the training corpuses, but the F1 score is still around ~70% on the full gold standard, which can be improved. There are also situations where precision should be nearly perfect (because false positives are really not acceptable) or where the recall should be optimized. Here we will try to improve this ...
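Ensemble learning in its simplest form can be sketched as hard majority voting over several classifiers. The toy rule-based classifiers below are stand-ins for illustration, not the linear SVMs trained in the article:

```python
from collections import Counter

# Illustrative stand-in classifiers (not the article's SVMs): each is a
# function mapping a feature vector to a binary label.
def clf_a(x): return 1 if x[0] > 0.5 else 0
def clf_b(x): return 1 if x[1] > 0.5 else 0
def clf_c(x): return 1 if sum(x) > 1.0 else 0

def majority_vote(classifiers, x):
    """Hard-voting ensemble: predict the label most classifiers agree on."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

ENSEMBLE = [clf_a, clf_b, clf_c]
print(majority_vote(ENSEMBLE, (0.9, 0.2)))  # clf_a and clf_c outvote clf_b
```

Voting can shift the precision/recall trade-off the paragraph mentions: requiring unanimity instead of a simple majority raises precision at the cost of recall.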

Dynamic Machine Learning Using the KBpedia Knowledge Graph – Part 1


In my previous blog post, Create a Domain Text Classifier Using Cognonto, I explained how one can use the KBpedia Knowledge Graph to automatically create positive and negative training corpuses for different machine learning tasks. I explained how SVM classifiers could be trained and used to check whether an input text belongs to the defined domain or not. This article is the first of two. In this first part I will extend this idea to explain how the KBpedia Knowledge Graph can be used, along with other machine learning techniques, to cope with different situations and use cases. I ...

Triplifying a real dictionary


The Linked Data Lexicography for High-End Language Technology (LDL4HELTA) project was started in cooperation between Semantic Web Company (SWC) and K Dictionaries. LDL4HELTA combines lexicography and language technology with semantic technologies and Linked (Open) Data mechanisms and technologies. One of the implementation steps of the project is to create a language graph from the dictionary data. The input data, described further below, is a Spanish dictionary core translated into multiple languages and available in XML format. This data should be triplified (which means converted to RDF – the Resource Description Framework) for several purposes, including enriching it with ...
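Triplification of a dictionary entry can be sketched as follows; the XML schema, entry content and namespace here are invented for illustration, since the post does not show the real LDL4HELTA format:

```python
import xml.etree.ElementTree as ET

# A made-up miniature entry; the real LDL4HELTA XML schema is not shown here.
ENTRY_XML = """
<entry id="casa">
  <form>casa</form>
  <sense lang="en">house</sense>
  <sense lang="de">Haus</sense>
</entry>
"""

BASE = "http://example.org/dict/"  # hypothetical namespace

def triplify(xml_text):
    """Convert one dictionary entry to N-Triples lines (simplified sketch)."""
    entry = ET.fromstring(xml_text)
    subj = f"<{BASE}{entry.get('id')}>"
    triples = [f'{subj} <{BASE}form> "{entry.findtext("form")}" .']
    for sense in entry.findall("sense"):
        # Language-tagged literals carry the target language of each sense.
        triples.append(f'{subj} <{BASE}translation> "{sense.text}"@{sense.get("lang")} .')
    return triples

for line in triplify(ENTRY_XML):
    print(line)
```

A production pipeline would use an established lexicographic vocabulary such as OntoLex-Lemon rather than ad hoc predicates, but the mechanics are the same: walk the XML tree and emit one triple per recoverable relation.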

Accepted paper in AAAI 2017


Hello Community! We are very pleased to announce that our paper “Radon – Rapid Discovery of Topological Relations” was accepted for presentation at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), which will be held February 4–9 at the Hilton San Francisco, San Francisco, California, USA. In more detail, we will present the following paper: “Radon – Rapid Discovery of Topological Relations” by Mohamed Ahmed Sherif, Kevin Dreßler, Panayiotis Smeros, and Axel-Cyrille Ngonga Ngomo. Abstract: Datasets containing geo-spatial resources are increasingly being represented according to the Linked Data principles. Several time-efficient approaches for discovering links between RDF resources ...

Pulling RDF out of MySQL


With a command line option and a very short stylesheet.

SUB Göttingen joins DCMI as Institutional Member


2016-11-11, DCMI is pleased to announce that Göttingen State and University Library (SUB Göttingen) has joined DCMI as an Institutional Member. SUB Göttingen is one of the most important research libraries in Germany and plays a leading role in a large number of national and international projects involving the optimization of literature and information provision and the establishment and development of digital research and information infrastructures. Its scope of activities includes the cooperative development of a Germany-wide service infrastructure for the acquisition, licensing and provision of electronic resources; the coordination of large-scale joint research projects for developing research infrastructures in the humanities ...

Donate to the commons this holiday season


Holiday season is nearly upon us. Donating to a charity is an alternative form of gift giving that shows you care, whilst directing your money towards helping those that need it. There are a lot of great and deserving causes you can support, and I’m certainly not going to tell you where you should donate your money. But I’ve been thinking about the various ways in which I can support projects that I care about. There are a lot of them as it turns out. And it occurred to me that I could ask friends and family who might want to buy me a gift to ...

The practice of open data


Open data is data that anyone can access, use and share. Open data is the result of several processes. The most obvious one is the release process that results in data being made available for reuse and sharing. But there are other processes that may take place before that open data is made available: collecting and curating a dataset; running it through quality checks; or ensuring that data has been properly anonymised. There are also processes that happen after data has been published. Providing support to users, for example. Or dealing with error reports or service issues with an API ...

Building and Maintaining the KBpedia Knowledge Graph


The Cognonto demo is powered by an extensive knowledge graph called the KBpedia Knowledge Graph, as organized according to the KBpedia Knowledge Ontology (KKO). KBpedia is used for all kinds of tasks, some of which are demonstrated by the Cognonto use cases . KBpedia powers dataset linkage and mapping tools, machine learning training workflows, entity and concept extractions, category and topic tagging, etc. The KBpedia Knowledge Graph is a structure of more than 39,000 reference concepts linked to 6 major knowledge bases and 20 popular ontologies in use across the Web. Unlike other knowledge graphs that analyze big corpuses of ...

Discogs: a business based on public domain data


When I’m discussing business models around open data I regularly refer to a few different examples. Not all of these have well developed case studies, so I thought I’d start trying to capture them here. In this first write-up I’m going to look at

Machine learning links


[work in progress – I’m updating it gradually] Machine Learning

Checking Fact Checkers


As of last month