Faculty of Computer Science and Engineering

Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/5

The Faculty of Computer Science and Engineering (FCSE) within UKIM is the largest and most prestigious faculty in the field of computer science and technology in Macedonia, and among the largest faculties in that field in the region. The FCSE teaching staff consists of 50 professors and 30 associates, including many of the most cited scientists in Macedonia and the most influential professors in the ICT industry in the Republic of Macedonia.


Search Results

Now showing 1 - 10 of 10
  • Item type: Publication
    RDFGraphGen: An RDF Graph Generator Based on SHACL Shapes
    (Springer Nature (Singapore), 2026-04-01)
    Vecovska, Marija
    ;
    Jakubowski, Maxime
    ;
    Hose, Katja
    Developing and testing modern RDF-based applications often requires access to RDF datasets with certain characteristics. Unfortunately, it is very difficult to find publicly available domain-specific knowledge graphs that conform to a particular set of characteristics. Hence, in this paper we propose RDFGraphGen, an open-source RDF graph generator that uses characteristics provided in the form of SHACL (Shapes Constraint Language) shapes to generate synthetic RDF graphs. RDFGraphGen is domain-agnostic, with a configurable graph structure, value constraints and distributions. It also comes with a number of predefined values for popular schema.org classes and properties, for more realistic graphs. Our results show that RDFGraphGen is scalable and can generate small, medium and large RDF graphs in any domain.
  • Item type: Publication
    RDFGraphGen: A Synthetic RDF Graph Generator based on SHACL Constraints
    (2024-07-25)
    Marija Vecovska
    ;
    Milos Jovanovik
    This paper introduces RDFGraphGen, a general-purpose, domain-independent generator of synthetic RDF graphs based on SHACL constraints. The Shapes Constraint Language (SHACL) is a W3C standard which specifies ways to validate data in RDF graphs by defining constraining shapes. Although the main purpose of SHACL is the validation of existing RDF data, we envisioned and implemented a reverse role for it, in order to address the lack of available RDF datasets in many RDF-based application development processes: we use SHACL shape definitions as a starting point to generate synthetic data for an RDF graph. The generation process involves extracting the constraints from the SHACL shapes, converting the specified constraints into rules, and then generating artificial data for a predefined number of RDF entities, based on these rules. The purpose of RDFGraphGen is the generation of small, medium or large RDF knowledge graphs for benchmarking, testing, quality control, training and similar purposes, for applications from the RDF, Linked Data and Semantic Web domains. RDFGraphGen is open-source and is available as a ready-to-use Python package.
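    The constraints-to-rules-to-entities pipeline described in the abstract can be sketched in plain Python. This is an illustrative sketch only, not the actual RDFGraphGen implementation: the dictionary-based shape representation and all helper names are hypothetical stand-ins for parsed SHACL property shapes.

```python
import random

# Hypothetical, simplified stand-in for a parsed SHACL node shape:
# each property rule carries a path (predicate), a datatype, and
# cardinality / value-range constraints.
person_shape = {
    "target_class": "schema:Person",
    "properties": [
        {"path": "schema:name", "datatype": "string",
         "minCount": 1, "maxCount": 1},
        {"path": "schema:age", "datatype": "integer",
         "minCount": 0, "maxCount": 1,
         "minInclusive": 18, "maxInclusive": 90},
    ],
}

def generate_value(rule):
    """Generate one artificial value satisfying a property rule."""
    if rule["datatype"] == "integer":
        return random.randint(rule.get("minInclusive", 0),
                              rule.get("maxInclusive", 100))
    # For strings, derive a readable placeholder from the property name.
    return f'"{rule["path"].split(":")[-1]}-{random.randint(1, 1000)}"'

def generate_entities(shape, n):
    """Generate n synthetic entities, as (s, p, o) triples, from the rules."""
    triples = []
    for i in range(n):
        subject = f"ex:entity{i}"
        triples.append((subject, "rdf:type", shape["target_class"]))
        for rule in shape["properties"]:
            # Pick a cardinality between minCount and maxCount.
            for _ in range(random.randint(rule["minCount"], rule["maxCount"])):
                triples.append((subject, rule["path"], generate_value(rule)))
    return triples

triples = generate_entities(person_shape, 5)
```

    A real generator would additionally serialize the triples to Turtle or N-Triples and draw values from the predefined schema.org value pools mentioned above.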
  • Item type: Publication
    MOCHA 2017 as a Challenge for Virtuoso
    (Springer International Publishing, 2017-10)
    Spasić, Mirko
    The Mighty Storage Challenge (MOCHA) aims to test the performance of solutions for SPARQL processing, in several aspects relevant for modern Linked Data applications. Virtuoso, by OpenLink Software, is a modern enterprise-grade solution for data access, integration, and relational database management, which provides a scalable RDF Quad Store. In this paper, we present a short overview of Virtuoso with a focus on RDF triple storage and SPARQL query execution. Furthermore, we showcase the final results of the MOCHA 2017 challenge and its tasks, along with a comparison between the performance of our system and the other participating systems.
  • Item type: Publication
    MOCHA2017: The Mighty Storage Challenge at ESWC 2017
    (Springer International Publishing, 2017-10)
    Georgala, Kleanthi
    ;
    Spasić, Mirko
    ;
    Petzka, Henning
    ;
    Röder, Michael
    The aim of the Mighty Storage Challenge (MOCHA) at ESWC 2017 was to test the performance of solutions for SPARQL processing in aspects that are relevant for modern applications. These include ingesting data, answering queries on large datasets and serving as a backend for applications driven by Linked Data. The challenge tested the systems against data derived from real applications and with realistic loads. An emphasis was put on dealing with data in the form of streams or updates.
  • Item type: Publication
    Authorization Proxy for SPARQL Endpoints
    (Springer International Publishing, 2017-09)
    A large number of emerging services expose their data using various Application Programming Interfaces (APIs). Consuming and fusing data from various providers is a challenging task, since a separate client implementation is usually required for each API. The Semantic Web provides a set of standards and mechanisms for unifying data representation on the Web, as well as means of uniform access via its query language, SPARQL. However, the lack of data protection mechanisms for the SPARQL query language and its HTTP-based data access protocol might be the main reason why it is not widely accepted as a data exchange and linking mechanism. This paper presents an authorization proxy that solves this problem using query interception and rewriting. For a given client, it returns only the permitted data for the requested query, as defined via a flexible policy language that combines the RDF and SPARQL standards for policy definition.
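    A minimal sketch of the interception-and-rewriting idea: restrict every incoming SELECT query to the named graphs a client's policy permits, by injecting FROM clauses before the WHERE block. The policy table, function names, and graph IRIs below are illustrative assumptions, not the paper's actual RDF/SPARQL-based policy language, and the string-level rewriting is far cruder than a real query parser.

```python
# Hypothetical per-client policy: the named graphs each client may read.
POLICIES = {
    "client-a": ["http://example.org/graphs/public",
                 "http://example.org/graphs/client-a"],
}

def rewrite_query(client_id, query):
    """Rewrite a SELECT query so it reads only permitted graphs.

    The proxy injects explicit FROM clauses before WHERE, so the
    underlying SPARQL endpoint never evaluates the query against
    unauthorized graphs. A production proxy would rewrite the parsed
    query algebra instead of the raw string.
    """
    graphs = POLICIES.get(client_id, [])
    if not graphs:
        raise PermissionError(f"no policy for client {client_id!r}")
    from_clauses = "\n".join(f"FROM <{g}>" for g in graphs)
    head, sep, body = query.partition("WHERE")
    if not sep:
        raise ValueError("only SELECT ... WHERE queries are handled here")
    return f"{head.rstrip()}\n{from_clauses}\nWHERE{body}"

rewritten = rewrite_query("client-a", "SELECT ?s ?p ?o WHERE { ?s ?p ?o }")
```

    The rewritten query is then forwarded to the real SPARQL endpoint, and the response is passed back to the client unchanged.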
  • Item type: Publication
    Benchmarking Virtuoso 8 at the Mighty Storage Challenge 2018: Challenge Results
    (Springer International Publishing, 2018-10)
    Spasić, Mirko
    Following the success of Virtuoso at last year's Mighty Storage Challenge (MOCHA 2017), we decided to participate once again and test the latest Virtuoso version against the new tasks which comprise the MOCHA 2018 challenge. The aim of the challenge is to test the performance of solutions for SPARQL processing in aspects relevant for modern applications: ingesting data, answering queries on large datasets and serving as a backend for applications driven by Linked Data. The challenge tests the systems against data derived from real applications and with realistic loads, with an emphasis on dealing with changing data in the form of streams or updates. Virtuoso, by OpenLink Software, is a modern enterprise-grade solution for data access, integration, and relational database management, which provides a scalable RDF Quad Store. In this paper, we present the final challenge results from MOCHA 2018 for Virtuoso v8.0, compared to the other participating systems. Based on these results, Virtuoso v8.0 was declared the overall winner of MOCHA 2018.
  • Item type: Publication
    MOCHA2018: The Mighty Storage Challenge at ESWC 2018
    (Springer International Publishing, 2018-10)
    Georgala, Kleanthi
    ;
    Spasić, Mirko
    ;
    Papakonstantinou, Vassilis
    ;
    Stadler, Claus
    The aim of the Mighty Storage Challenge (MOCHA) at ESWC 2018 was to test the performance of solutions for SPARQL processing in aspects that are relevant for modern applications. These include ingesting data, answering queries on large datasets and serving as a backend for applications driven by Linked Data. The challenge tested the systems against data derived from real applications and with realistic loads. An emphasis was put on dealing with data in the form of streams or updates.
  • Item type: Publication
    Semantic Web and Data Science Integration Using Computational Books
    (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Republic of North Macedonia, 2021-05)
    Mileski, Dimitar
    This paper presents an architecture for the development of web applications for exploring semantic knowledge graphs through parameterized interactive visualizations. The web interface and the interactive parameterized visualizations, in the form of a computational book, provide a way in which knowledge graphs can be explored. An important part of using this approach for building interactive web visualizations is that we can substitute the knowledge graph entities with other entities within the existing interactive visualizations, execute commands in a web-based environment, and get the same visualization for the new entities. With this architecture, various applications for interactive visualization of knowledge graphs can be developed, which can also stimulate interest in exploring the graph and its entities. We also present a publicly available open-source use-case built using the concepts discussed in this paper.
  • Item type: Publication
    A GeoSPARQL Compliance Benchmark
    (MDPI, 2021-07-16)
    Homburg, Timo
    ;
    Spasić, Mirko
    GeoSPARQL is an important standard for the geospatial linked data community, given that it defines a vocabulary for representing geospatial data in RDF, defines an extension to SPARQL for processing geospatial data, and provides support for both qualitative and quantitative spatial reasoning. However, what the community is missing is a comprehensive and objective way to measure the extent of GeoSPARQL support in GeoSPARQL-enabled RDF triplestores. To fill this gap, we developed the GeoSPARQL compliance benchmark. We propose a series of tests that check for the compliance of RDF triplestores with the GeoSPARQL standard, in order to test how many of the requirements outlined in the standard a tested system supports. This matters because the extent of GeoSPARQL support varies greatly between different triplestore implementations, and is of great importance to different users. In order to showcase the benchmark and its applicability, we present a comparison of the benchmark results of several triplestores, providing insight into their current GeoSPARQL support and the overall GeoSPARQL support in the geospatial linked data domain.
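    The benchmark idea, reduced to a sketch: pair each GeoSPARQL requirement with a probe query, run the probes through a triplestore's query function, and report the share that succeed. The two probe queries and the harness below are illustrative assumptions; the actual benchmark derives a much larger test suite from the requirements of the GeoSPARQL specification.

```python
# Illustrative subset of requirement -> probe-query pairs.
COMPLIANCE_TESTS = {
    "sparql-protocol": "ASK { ?s ?p ?o }",
    "geof:distance": (
        "PREFIX geof: <http://www.opengis.net/def/function/geosparql/> "
        "PREFIX geo: <http://www.opengis.net/ont/geosparql#> "
        "SELECT (geof:distance(?a, ?b, "
        "<http://www.opengis.net/def/uom/OGC/1.0/metre>) AS ?d) "
        "WHERE { ?x geo:asWKT ?a . ?y geo:asWKT ?b }"
    ),
}

def run_benchmark(execute, tests):
    """Run each probe through `execute` (a callable: query -> result).

    A requirement counts as supported if the endpoint evaluates the
    probe without raising; returns per-test results and an overall
    compliance percentage.
    """
    results = {}
    for requirement, query in tests.items():
        try:
            execute(query)
            results[requirement] = True
        except Exception:
            results[requirement] = False
    score = 100.0 * sum(results.values()) / len(results)
    return results, score
```

    Plugging in a thin wrapper around any SPARQL endpoint's HTTP interface as `execute` yields a per-requirement support report for that triplestore.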
  • Item type: Publication
    Transforming Geospatial RDF Data into GeoSPARQL-Compliant Data: A Case of Traffic Data
    (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Republic of North Macedonia, 2019-05)
    Spasić, Mirko
    Geospatial RDF datasets tend to use latitude and longitude properties to denote the geographic location of the entities described within them. On the other hand, geographic information systems prefer the use of WKT and GML geometries when working with geospatial data. In this paper, we present a process of RDF data transformation which produces a GeoSPARQL-compliant dataset, using an RDF geospatial dataset with traffic data as a starting point. The traffic data consists of vehicle traces, which comprise numerous points with specific latitude and longitude values. With our transformations, we enable querying of the dataset with GeoSPARQL extensions, which can be used to feed a GIS solution.
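    The core of such a transformation can be sketched without any RDF library: for each subject carrying WGS84 lat/long triples, mint a geometry node and attach a GeoSPARQL WKT literal. Note that WKT points are written longitude-first, i.e. POINT(long lat). The function name and the geometry-IRI minting scheme are illustrative assumptions, not the paper's exact procedure.

```python
GEO = "http://www.opengis.net/ont/geosparql#"

def to_geosparql(triples):
    """Convert wgs84 lat/long triples into GeoSPARQL geometry triples.

    Input: (subject, predicate, object) triples using the wgs84_pos
    lat/long predicates. Output: geo:hasGeometry links plus geo:asWKT
    point literals, one geometry per subject that has both coordinates.
    """
    lat, lon = {}, {}
    for s, p, o in triples:
        if p.endswith("#lat"):
            lat[s] = float(o)
        elif p.endswith("#long"):
            lon[s] = float(o)
    out = []
    for s in lat.keys() & lon.keys():
        geom = s + "/geometry"  # mint a geometry IRI per subject
        out.append((s, GEO + "hasGeometry", geom))
        # WKT uses longitude-first coordinate order: POINT(long lat)
        out.append((geom, GEO + "asWKT",
                    f'"POINT({lon[s]} {lat[s]})"^^<{GEO}wktLiteral>'))
    return out

wgs = "http://www.w3.org/2003/01/geo/wgs84_pos#"
data = [("ex:p1", wgs + "lat", "41.99"), ("ex:p1", wgs + "long", "21.43")]
result = to_geosparql(data)
```

    Once the dataset carries geo:asWKT literals, GeoSPARQL functions such as distance and containment checks can be applied to it directly.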