Faculty of Computer Science and Engineering

Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/5

The Faculty of Computer Science and Engineering (FCSE) within UKIM is the largest and most prestigious faculty in the field of computer science and technologies in Macedonia, and among the largest faculties in that field in the region. The FCSE teaching staff consists of 50 professors and 30 associates. These include many “best in field” personnel, such as the most referenced scientists in Macedonia and the most influential professors in the ICT industry in the Republic of Macedonia.

Search Results

Now showing 1 - 10 of 59
  • Item type: Publication
    Technological solutions for older people with Alzheimer's disease
    (Bentham Science Publishers, 2018-09-01)
    Maresova, Petra; Tomsone, Signe; Madureira, Joana; Mendes, Ana
    In the nineties, numerous studies began to highlight the problem of the increasing number of people with Alzheimer’s disease in developed countries, especially in the context of demographic change. At the same time, the 21st century is characterised by the development of advanced technologies that penetrate all areas of human life. Digital devices, sensors, and intelligent applications are tools that can help seniors and allow their caregivers better communication and monitoring. The aim of the paper is to provide an up-to-date summary of the use of technological solutions for improving health and safety for people with Alzheimer’s disease. Firstly, the problems and needs of senior citizens with Alzheimer’s disease (AD) and their caregivers are specified. Secondly, a scoping review is performed of the technological solutions suggested to assist this specific group of patients. Works obtained from the following libraries are used in this scoping review: Web of Science, PubMed, Springer, ACM and IEEE Xplore. Four independent reviewers screened the identified records and selected relevant articles published in the period from 2007 to 2018. A total of 6,705 publications were identified, of which 128 full papers were screened. Results obtained from the relevant studies were further divided into the following categories according to the type and use of technologies: devices, processing, and activity recognition. The leading technological solutions in the category of devices are wearables and ambient noninvasive sensors. The introduction and utilization of these technologies, however, brings challenges in acceptability, durability, ease of use, communication, and power requirements. Furthermore, it needs to be pointed out that these technological solutions should be based on open standards.
  • Item type: Publication
    End-users’ AAL and ELE service scenarios in smart personal environments
    (2017-11-24)
    Autexier, Serge; Goleva, Rossitza; M Garcia, Nuno; Stainov, Rumen; Ganchev, Ivan
    This chapter presents results from ambient assisted living (AAL) and enhanced living environment (ELE) service identification and testing performed within an AAL lab. Possible end-user testing groups and scenarios of ‘AAL as a service’ and ‘ELE as a service’ (ELEaaS) platforms are described and specified. Firstly, protocol and service classifications are presented according to end-user-specific requirements from a communication and information point of view, as the chapter aims to show how end-users, caregivers and service providers can be prepared for the challenges of the market. The aim of the test group is to verify and validate the created, integrated, described and specified ELE platforms and services. The testing is based on the platform technology and depends on the analysis of user requirements and ongoing work on use cases. Existing living-lab experience has been used and enriched with customized information and communication services known from the information and communication technologies sector. The ELEaaS is described in general terms, along with the testing that needs to be performed against the general type of provided functionalities. Furthermore, customization of the services, applicability to the needs of all stakeholders, flexibility of data exchange, and integration and interoperability between different versions and types of platforms also need to be verified.
  • Item type: Publication
    CoviHealth: Novel approach of a mobile application for nutrition and physical activity management for teenagers
    (2019-09-25)
    Vanessa Villasana, María; Miguel Pires, Ivan; Sá, Juliana; M Garcia, Nuno; Pombo, Nuno
    The increasing number of teenagers with obesity and a sedentary lifestyle is related to poor diet and physical activity habits. There is a large diversity of mobile applications related to diet control and physical activity, mainly directed at adults and without any medical control. The CoviHealth project consists of the implementation of a mobile application for young people that promotes healthy dietary habits and physical activity based on the control of anthropometric parameters and gamification. The main contribution of this paper is a detailed specification of an integrated mobile application for promoting healthy habits among young people. Additionally, it leverages the effects of gamification and medical control on stimulating education in healthy habits. Even though other mobile applications share some features with the proposed application, to the best of our knowledge, a standardized specification for the integration of activity recognition, healthy habits and food intake for teenagers is still lacking.
  • Item type: Publication
    A short review of the environmental impact of automated weed control
    (2017)
    Agricultural food production is in a constant struggle to meet market demands. Weed control is used to increase the per-land-unit production of agricultural fields. The process of weed removal is usually performed manually and is a time-consuming and labor-demanding task. Since mechanical removal is a difficult process, plantations use herbicides to remove unwanted plants. Herbicides are applied in large quantities and thus often have a degenerative effect on the land. Sometimes they even endanger the health of the workers who apply them and of the end users who consume the harvested product. We review the technologies used for automated weed control and their environmental impact, specifically pollution reduction. We also review the herbicide reductions reported in implemented and tested approaches to precision agriculture, with emphasis on the environmental impact of weed control. Based on the reviewed papers, we conclude that automated weed detection can identify unwanted plants with decent accuracy. Consequently, this can facilitate building autonomous spraying systems that significantly reduce the quantity of applied herbicides by precisely applying the chemicals only on the plants, or by mechanically removing unwanted plants. We also review the challenges that need to be overcome, such as precise detection of weed plant types, the speed of the process, and some security considerations that arise from the involvement of information and communication technologies.
  • Item type: Publication
    Cloud based Data Acquisition and Annotation Architecture for Weed Control
    (2018-04)
    In this paper we present a short evaluation of a cloud-based architecture for data acquisition and annotation. We evaluate the implemented annotation system and give initial results on its ability to produce accurate labels on the data. The data consists of plant field images. The users partially annotate the data, and we use segmentation algorithms to enrich the annotation of the images. We compare three different segmentation algorithms used for the annotation. The results show that the GrabCut algorithm performs better than the Watershed and nearest-neighbour approaches, but there is still room for improvement.
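The annotation-enrichment idea above — propagating a few user-provided labels to the rest of the pixels — can be sketched with a brute-force nearest-neighbour pass. This is a minimal illustration only, not the paper's implementation (the paper compares GrabCut, Watershed and a nearest-neighbour approach on real field images); the function name and toy data are invented for the example.

```python
import numpy as np

def propagate_labels(pixels, labeled_idx, labels):
    """Assign each pixel the label of the nearest user-labeled
    pixel in RGB space (brute-force nearest neighbour)."""
    seeds = pixels[labeled_idx]                        # (k, 3) labeled colours
    # squared colour distance from every pixel to every labeled seed
    d2 = ((pixels[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
    return labels[d2.argmin(axis=1)]                   # (n,) full labelling

# Toy "field image" flattened to 4 pixels: two soil-like, two plant-like.
pixels = np.array([[30, 25, 20], [35, 30, 25],
                   [40, 160, 50], [45, 170, 55]], dtype=float)
labeled_idx = np.array([0, 2])     # the user annotated pixels 0 and 2
labels = np.array([0, 1])          # 0 = soil, 1 = plant
full = propagate_labels(pixels, labeled_idx, labels)
print(full.tolist())               # → [0, 0, 1, 1]
```

A real pipeline would work in spatial as well as colour coordinates and on full-resolution images, but the label-propagation principle is the same.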
  • Item type: Publication
    Proceedings of the “Think Tank Hackathon’’, Big Data Training School for Life Sciences Follow-up, Ljubljana 6th–7th February 2018
    (2018-04-19)
    K Schulze, Sabrina; Ramšak, Živa; Hoang, Yen; Pfeil, Juliane
    On 6th and 7th February 2018 a Think Tank took place in Ljubljana, Slovenia. It was a follow-up of the “Big Data Training School for Life Sciences” held in Uppsala, Sweden, in September 2017. The focus was on identifying topics of interest and optimising the programme for a forthcoming “Advanced” Big Data Training School for Life Sciences, which we hope will again be supported by the COST Action CHARME (Harmonising standardisation strategies to increase efficiency and competitiveness of European life-science research - CA15110). The Think Tank aimed to go into the details of several topics that were - to a degree - covered by the former training school. Likewise, discussions embraced the recent experience of the attendees in light of the new knowledge obtained at the first edition of the training school and its relevance to their current and upcoming work. The 2018 training school should strive for and further facilitate optimised applications of Big Data technologies in the life sciences. The attendees of this hackathon organised the workshop entirely themselves.
  • Item type: Publication
    The CHARME “Advanced Big Data Training School for Life Sciences”: an example of good practices for training on current bioinformatics challenges
    (2019-02-05)
    Hoang, Yen; Pfeil, Juliane; Zagoršcak, Maja; Y. A. Thieffry, Axel
    The CHARME “Advanced Big Data Training School for Life Sciences” took place during 3-7 September 2018 at the Campus Nord of the Technical University of Catalonia (UPC) in Barcelona (ES). The school was organised by the Data Management Group (DAMA) of the UPC in collaboration with EMBnet as a follow-up of the first CHARME-EMBnet “Big Data Training School for Life Sciences”, held in Uppsala, Sweden, in September 2017. The learning objectives of the school were defined and agreed during the CHARME “Think Tank Hackathon” held in Ljubljana, Slovenia, in February 2018. This article describes in detail the organisation of the training school, the contents covered, and the interactions and relationships established between the trainees, the trainers and the organisers thanks to this school.
  • Item type: Publication
    Skin lesion segmentation with deep learning
    (IEEE, 2019-07-01)
    Lameski, Jane; Jovanov, Andrej
    Skin lesion segmentation is an important process in skin diagnostics because it improves manual and computer-aided diagnostics by focusing the medical personnel on specific parts of the skin. Image segmentation is a common task in computer vision that partitions a digital image into multiple segments, and deep neural networks have proven reliable for it. In this paper, we investigate the applicability of deep learning methods for skin lesion segmentation by evaluating three architectures: a pre-trained VGG16 encoder combined with a SegNet decoder, TernausNet, and DeepLabV3+. The data set consists of RGB images of skin lesions and the ground truth of their segmentation. The image sizes vary from hundreds to thousands of pixels per dimension. We evaluated the approaches with the Jaccard index and the computational efficiency of the training. The results show that the three deep neural network architectures achieve Jaccard index scores above 0.82, while DeepLabV3+ outperforms the other approaches with a score of 0.876. The results are encouraging and can lead to fully fledged automated approaches for skin lesion segmentation.
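The Jaccard index used for evaluation above is the intersection-over-union of the predicted and ground-truth masks. A minimal sketch of the metric on binary masks (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Intersection over union of two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0   # both empty → perfect match

# Toy 4x4 masks: the predicted lesion overlaps the ground truth
# on 2 pixels, with 6 pixels in the union.
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1
pred  = np.zeros((4, 4), dtype=int); pred[1:3, 2:4] = 1
print(round(jaccard_index(pred, truth), 3))   # → 0.333
```

A score of 0.876, as reported for DeepLabV3+, means the predicted and true lesion regions share 87.6% of their combined area.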
  • Item type: Publication
    Scalable Cloud-based ETL for Self-serving Analytics
    (2019)
    Apanowicz, Cas; Stencel, Krzysztof; Slezak, Dominik
    Nowadays, companies must inevitably analyze the available data and extract meaningful knowledge. As an essential prerequisite, Extract-Transform-Load (ETL) requires significant effort, especially for Big Data. Existing solutions fail to formalize, integrate and evaluate the ETL process for Big Data in a scalable and cost-effective way. In this paper, we introduce a cloud-based architecture for data fusion and aggregation from a variety of sources. We identify three scenarios that generalize data aggregation during ETL. They are particularly valuable in the context of machine learning, as they facilitate feature engineering even in complex cases where data from an extended time period has to be processed. In our experiments, we investigate user logs collected with Kinesis streams on Amazon AWS Hadoop clusters and demonstrate the scalability of our solution. The considered datasets range from 30 GB to 2.5 TB. The results were deployed in domains such as churn prediction, fraud detection, service outage prediction and, more generally, decision support and recommendation systems.
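The kind of log aggregation this abstract describes — rolling raw user events up into per-user features for machine learning — can be sketched in a few lines. This is an illustrative toy only; the paper's own pipeline runs on Kinesis streams and Hadoop clusters at terabyte scale, and the field names here are invented.

```python
from collections import defaultdict

def aggregate_logs(logs):
    """Roll raw (user, bytes) event records up into per-user
    features: event count and total bytes transferred."""
    feats = defaultdict(lambda: {"events": 0, "bytes": 0})
    for user, nbytes in logs:
        feats[user]["events"] += 1
        feats[user]["bytes"] += nbytes
    return dict(feats)

# Three raw log events for two users.
logs = [("u1", 120), ("u2", 300), ("u1", 80)]
print(aggregate_logs(logs))
# → {'u1': {'events': 2, 'bytes': 200}, 'u2': {'events': 1, 'bytes': 300}}
```

At scale the same transform would be expressed as a distributed group-by (e.g. in Spark or Hive) rather than an in-memory dictionary.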
  • Item type: Publication
    Towards Music Generation With Deep Learning Algorithms
    (2018)
    Docevski, Marko
    Computer music generation has applications in many areas, including computer-aided music composition, on-demand music generation for video games, sports events and multimedia experiences, creating music in the style of deceased artists, etc. In this work we describe our approach to music generation. We trained a deep learning model on a corpus of works by several authors. By priming the model with a snippet of an author's work, we used it to create new music in their style. The dataset consists of music for guitar in MIDI format, containing only one part/instrument. We gathered more than 2000 files, of which we used from 5 to 300 per experiment. The data for the deep learning model is represented in piano-roll format, a binary matrix where one axis represents time and the other axis represents MIDI notes. Two deep learning architectures were evaluated: a 2-layer recurrent neural network of LSTM (Long Short-Term Memory) cells, and an Encoder-Decoder (Auto-Encoder) architecture for sequence learning where both the encoder and decoder are built as recurrent layers of LSTM cells. The models were implemented in the Keras deep-learning library. The results were evaluated on a subjective basis, and on the evaluated datasets both architectures produced results of limited quality.
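The piano-roll encoding described in this abstract — a binary matrix with time on one axis and MIDI pitches on the other — can be sketched as follows. The function name, time quantisation and toy notes are assumptions for illustration, not the paper's actual preprocessing code.

```python
import numpy as np

def to_piano_roll(notes, n_steps, n_pitches=128):
    """Binary piano roll: rows = time steps, columns = MIDI pitches.
    `notes` is a list of (pitch, start_step, duration_steps) tuples."""
    roll = np.zeros((n_steps, n_pitches), dtype=np.uint8)
    for pitch, start, dur in notes:
        roll[start:start + dur, pitch] = 1   # note sounding over its span
    return roll

# Three guitar notes: E4 (64), then G4 (67), overlapping a held B3 (59).
notes = [(64, 0, 2), (67, 2, 2), (59, 1, 3)]
roll = to_piano_roll(notes, n_steps=4)
print(roll.shape)                 # → (4, 128)
print(int(roll.sum()))            # → 7 active time-pitch cells
```

Each row of the matrix is then one time-step input to the LSTM, so the network learns which pitch combinations tend to follow one another.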