Faculty of Computer Science and Engineering
Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/5
The Faculty of Computer Science and Engineering (FCSE) within UKIM is the largest and most prestigious faculty in the field of computer science and technologies in Macedonia, and among the largest faculties in that field in the region.
The FCSE teaching staff consists of 50 professors and 30 associates, including many “best in field” personnel: the most referenced scientists in Macedonia and the most influential professors in the ICT industry in the Republic of Macedonia.
Search Results (4)
Item type: Publication
Linked Data Application Development Methodology (Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje, 2016-11-05)
The vast amount of data available over the distributed infrastructure of the Web has initiated the development of techniques for their representation, storage and usage. One of these techniques is the Linked Data paradigm, which aims to provide unified practices for publishing and contextually interlinking data on the Web, by using the World Wide Web Consortium (W3C) standards and the Semantic Web technologies. This approach enables the transformation of the Web from a web of documents to a web of data. With it, the Web transforms into a distributed network of data which can be used by software agents and machines. The interlinked nature of the distributed datasets enables the creation of advanced use-case scenarios for the end users and their applications, scenarios previously unavailable over isolated data silos. This creates opportunities for generating new business value in the industry. The adoption of the Linked Data principles by data publishers from the research community and the industry has led to the creation of the Linked Open Data (LOD) Cloud, a vast collection of interlinked data published on and accessible via the existing infrastructure of the Web. The experience in creating these Linked Data datasets has led to the development of a few methodologies for transforming and publishing Linked Data. However, even though these methodologies cover the process of modeling, transforming/generating and publishing Linked Data, they do not consider reuse of the steps from the life-cycle. This results in separate and independent efforts to generate Linked Data within a given domain, which always go through the entire set of life-cycle steps.
In this PhD thesis, based on our experience with generating Linked Data in various domains and on the existing Linked Data methodologies, we define a new Linked Data methodology with a focus on reuse. It consists of five steps which encompass the tasks of studying the domain, modeling the data, transforming the data, publishing it and exploiting it. In each of the steps, the methodology provides guidance to data publishers on defining reusable components in the form of tools, schemas and services for the given domain. With this, future Linked Data publishers in the domain would be able to reuse these components to go through the life-cycle steps in a more efficient and productive manner. With the reuse of schemas from the domain, the resulting Linked Data dataset will be compatible and aligned with other datasets generated by reusing the same components, which further increases the value of the datasets. This approach aims to encourage data publishers to generate high-quality, aligned Linked Data datasets from various domains, leading to further growth in the number of datasets on the LOD Cloud, their quality and their exploitation scenarios. With the emergence of data-driven scientific fields, such as Data Science, creating and publishing high-quality Linked Data datasets on the Web is becoming even more important, as it provides an open dataspace built on existing Web standards. Such a dataspace enables data scientists to perform data analytics over the cleaned, structured and aligned data in it, in order to produce new knowledge and introduce new value in a given domain. As the Linked Data principles are also applicable within closed environments over proprietary data, the same methods and approaches are applicable in the enterprise domain as well.
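The Linked Data principles summarized in the abstract above (URIs as identifiers, W3C standards such as RDF, and explicit links between datasets) can be illustrated with a minimal sketch. The example below assembles a tiny RDF graph in Turtle syntax using plain Python string handling; the resource URIs and the `owl:sameAs` link target are hypothetical placeholders, not taken from the thesis.

```python
# Minimal sketch of the Linked Data idea: describe an entity with RDF
# triples and interlink it with an external dataset via owl:sameAs.
# All URIs below are hypothetical placeholders.

PREFIXES = """\
@prefix ex:   <http://example.org/resource/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
"""

def triple(s: str, p: str, o: str) -> str:
    """Serialize one RDF triple in Turtle syntax."""
    return f"{s} {p} {o} ."

triples = [
    triple("ex:Skopje", "rdfs:label", '"Skopje"@en'),
    # The interlinking step: assert that the local resource denotes the
    # same real-world entity as a resource in an external dataset.
    triple("ex:Skopje", "owl:sameAs", "<http://dbpedia.org/resource/Skopje>"),
]

turtle_doc = PREFIXES + "\n".join(triples)
print(turtle_doc)
```

Publishing such a document at a dereferenceable URI, and reusing shared vocabularies like `rdfs`, is what lets independently produced datasets align, which is the reuse effect the methodology targets.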
Item type: Publication
Semantic Web and Data Science Integration Using Computational Books (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Republic of North Macedonia, 2021-05)
Mileski, Dimitar
This paper presents the architecture for the development of web applications for exploring semantic knowledge graphs through parameterized interactive visualizations. The web interface and the interactive parameterized visualizations, in the form of a computational book, provide a way in which knowledge graphs can be explored. An important part of using this approach for building interactive web visualizations is that we can substitute the knowledge graph entities with other entities within the existing interactive visualizations, execute commands in a web-based environment, and get the same visualization for the new entities. With this architecture, various applications for interactive visualization of knowledge graphs can be developed, which can also stimulate interest in exploring the graph and its entities. We also present a publicly available open source use-case that is built using the concepts discussed in this paper.
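The entity-substitution idea described in this abstract can be sketched as a parameterized query template: the knowledge-graph entity is a parameter, so the same downstream visualization runs unchanged for any entity. The query shape, entity URI, and limit parameter below are hypothetical illustrations, not the paper's actual implementation.

```python
# Sketch of parameterized knowledge-graph exploration: a SPARQL query
# template in which the entity of interest can be swapped, so one
# visualization pipeline serves every entity. Template and URIs are
# hypothetical placeholders.
from string import Template

QUERY_TEMPLATE = Template("""\
SELECT ?neighbor ?label WHERE {
  <$entity> ?p ?neighbor .
  ?neighbor rdfs:label ?label .
} LIMIT $limit
""")

def build_query(entity_uri: str, limit: int = 25) -> str:
    """Instantiate the template for a new entity; the downstream
    visualization code that renders the results stays unchanged."""
    return QUERY_TEMPLATE.substitute(entity=entity_uri, limit=limit)

q = build_query("http://example.org/resource/Skopje")
print(q)
```

In a computational-book setting, re-executing the cell with a different `entity_uri` regenerates the same interactive visualization for the new entity, which is the substitution property the paper highlights.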
Item type: Publication
Delegated Attribute-Based Access Control (DABAC) for Contextual Linked Data (Information Society of Serbia - ISOS, 2019)
Security is an inevitable part of every information system. It is a cross-cutting concern that affects every part of the system. There is a constant trade-off between a secure system and convenient security management, and this management becomes more demanding when the permissions are context-dependent. The delegation of authorization is one way to make this process more convenient, by including and allowing multiple individuals to contribute. We provide a solution for combining multiple security rules such that data owners can delegate a part of their permissions over their data, and can validate them over all allowed data during the security permissions design process.
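The combination of context-dependent rules with delegation described above can be sketched in a few lines. The model below is a generic illustration of attribute-based access control with narrowing delegation, under the assumption that a delegated rule may only restrict, never widen, the owner's permissions; it is not the DABAC model defined in the paper, and all names are hypothetical.

```python
# Minimal sketch of attribute-based access control with delegation.
# Assumption (not from the paper): delegated rules are conjoined with
# the owner's rules, so a delegate can only narrow access.
from dataclasses import dataclass, field
from typing import Callable

# A rule decides access from subject, resource and context attributes.
Rule = Callable[[dict, dict, dict], bool]

@dataclass
class Owner:
    rules: list = field(default_factory=list)       # owner's own grants
    delegated: list = field(default_factory=list)   # delegated restrictions

    def delegate(self, rule: Rule) -> None:
        """Accept a rule contributed by another individual."""
        self.delegated.append(rule)

    def allowed(self, subject: dict, resource: dict, context: dict) -> bool:
        granted = any(r(subject, resource, context) for r in self.rules)
        narrowed = all(r(subject, resource, context) for r in self.delegated)
        return granted and narrowed

owner = Owner(rules=[lambda s, r, c: s.get("role") == "researcher"])
# A delegated, context-dependent restriction: access only during work hours.
owner.delegate(lambda s, r, c: 8 <= c.get("hour", 0) < 18)

print(owner.allowed({"role": "researcher"}, {}, {"hour": 10}))  # True
print(owner.allowed({"role": "researcher"}, {}, {"hour": 22}))  # False
```

Because every rule is a plain predicate over attributes, the design-time validation mentioned in the abstract amounts to evaluating the combined rule set against representative subject/context combinations.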
Item type: Publication
Transforming Geospatial RDF Data into GeoSPARQL-Compliant Data: A Case of Traffic Data (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Republic of North Macedonia, 2019-05)
Spasić, Mirko
Geospatial RDF datasets tend to use latitude and longitude properties to denote the geographic location of the entities described within them. On the other hand, geographic information systems prefer the use of WKT and GML geometries when working with geospatial data. In this paper, we present a process of RDF data transformation which produces a GeoSPARQL-compliant dataset, using an RDF geospatial dataset with traffic data as a starting point. The traffic data comprises vehicle traces, which consist of numerous points with specific latitude and longitude values. With our transformations, we enable querying of the dataset with GeoSPARQL extensions, which can be used to feed a GIS solution.
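The core transformation step this abstract describes, from latitude/longitude value pairs to the WKT geometries that GeoSPARQL expects, can be sketched as follows. The sample trace coordinates are hypothetical, not taken from the paper's dataset; note that WKT orders coordinates longitude-first.

```python
# Sketch of converting (latitude, longitude) pairs into WKT literals,
# the geometry serialization GeoSPARQL works with. Sample coordinates
# are hypothetical placeholders.

def to_wkt_point(lat: float, lon: float) -> str:
    """Serialize one geographic position as a WKT POINT literal."""
    return f"POINT({lon} {lat})"  # WKT uses longitude-first ordering

def trace_to_linestring(points: list) -> str:
    """Serialize an ordered vehicle trace, given as (lat, lon) pairs,
    as a single WKT LINESTRING geometry."""
    coords = ", ".join(f"{lon} {lat}" for lat, lon in points)
    return f"LINESTRING({coords})"

trace = [(41.9981, 21.4254), (41.9990, 21.4301)]  # (lat, lon) pairs
print(to_wkt_point(*trace[0]))   # POINT(21.4254 41.9981)
print(trace_to_linestring(trace))
```

Once each point or trace carries such a WKT literal, spatial filters (containment, distance, intersection) become expressible in GeoSPARQL queries instead of requiring ad hoc arithmetic over separate latitude and longitude values.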
