Files that you'll need (data & code)
On the Wikipedia page, click Edit (above the table), copy everything and paste it into a .txt file, or use the ready-made file "company_revenue_data.txt"
(NOTE: use the ready-made file; the table on Wikipedia keeps changing, which makes the commands in Step 3 unusable)
In GraphDB, go to Import -> Tabular (OntoRefine) and select your file
In "Line-based text files" set:
- ignore the first 18 lines
- parse every 3 lines into one row
Click "Create project"
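To see what these import settings do, here is a minimal Python sketch of the same line-based parsing (the function name and the synthetic sample are illustrative, not part of OntoRefine):

```python
def parse_line_based(lines, skip=18, lines_per_row=3):
    # Drop the leading header lines, then fold every `lines_per_row`
    # consecutive lines into a single row, as the import settings above do.
    body = lines[skip:]
    return [body[i:i + lines_per_row] for i in range(0, len(body), lines_per_row)]

# Tiny synthetic example: 24 lines; skipping the first 18 leaves
# 6 lines, which become two rows of 3 cells each.
sample = [f"line {n}" for n in range(1, 25)]
rows = parse_line_based(sample)
```

Each resulting row holds the three consecutive source lines that describe one company in the pasted wiki text.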
In the Undo/Redo tab, select Apply
Paste the list of commands from the file "company_revenue_Ontorefine_Commands.txt"
Copy the OntoRefine SPARQL endpoint and put it between the angle brackets (<>) after SERVICE in "company_revenue_insert_query.txt"
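The skeleton of that edit looks roughly like the sketch below; the endpoint URL shown is a placeholder for your project's own OntoRefine endpoint, and the triple patterns are illustrative only (the real mapping is in "company_revenue_insert_query.txt"):

```sparql
INSERT { ?s ?p ?o }
WHERE {
  # Replace the URL between <> with the SPARQL endpoint
  # shown for your OntoRefine project:
  SERVICE <http://localhost:7200/rdf-bridge/1234567890> {
    ?s ?p ?o .   # the ready-made query uses patterns over the project's columns
  }
}
```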
create a repository (e.g. companies, base URI http://example.org/companies/)
connect to the repository
Go to SPARQL, paste your query, and run it (the query is in "company_revenue_insert_query.txt")
(Optional) Use spif functions to generate resources for the CEOs and country codes from the literals matching ?company :CEO ?CEO ; :CountryCode ?countryCode .
- see the spif function in "company_revenue_insert_query.txt", where the companyID URIs are generated using BIND(spif...)
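A hedged sketch of that pattern for the CEO literals, placed inside the WHERE clause; the template URI and variable names are assumptions, mirroring how the ready-made query builds the companyID URIs:

```sparql
PREFIX spif: <http://spinrdf.org/spif#>

# Build an IRI from the CEO name literal, URL-encoding it first
# so spaces and special characters are safe in the URI:
BIND(spif:buildURI("http://example.org/companies/ceo/{?1}",
                   spif:encodeURL(?CEO)) AS ?ceoIRI)
```

The same approach can be repeated for ?countryCode with a different template URI.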