Now showing 1 - 7 of 7
  • An Overview of GraphQL: Core Features and Architecture
    (ICT Innovations 2020, 2020-09)
  • Few-shot remote sensing image scene classification with CLIP and prompt learning
    (Springer Science and Business Media LLC, 2025-12-23)
  • Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data
    (2024-10-01)
    Accurate semantic segmentation of remote sensing imagery is critical for various Earth observation applications, such as land cover mapping, urban planning, and environmental monitoring. However, individual data sources often present limitations for this task. Very High Resolution (VHR) aerial imagery provides rich spatial details but cannot capture temporal information about land cover changes. Conversely, Satellite Image Time Series (SITS) capture temporal dynamics, such as seasonal variations in vegetation, but with limited spatial resolution, making it difficult to distinguish fine-scale objects. This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation that leverages the complementary strengths of both VHR aerial imagery and SITS. The proposed model consists of two independent deep learning branches. One branch captures detailed textures from aerial imagery using UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone. The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE). This approach leads to state-of-the-art results on the FLAIR dataset, a large-scale benchmark for land cover segmentation using multi-source optical imagery. The findings highlight the importance of multi-modality fusion in improving the accuracy and robustness of semantic segmentation in remote sensing applications.
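    The two-branch late-fusion design described in the abstract can be sketched as a weighted combination of per-pixel class probabilities from two independent networks. The function names, array shapes, and equal weighting below are illustrative assumptions, not the paper's implementation; the actual LF-DLM branches are a UNetFormer/MaxViT model and a U-TAE model.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fuse(aerial_logits, sits_logits, w_aerial=0.5):
    """Fuse per-pixel class scores from two independent branches.

    aerial_logits, sits_logits: (H, W, C) class-logit maps produced by
    the two branches on the same spatial grid.
    """
    p_aerial = softmax(aerial_logits)
    p_sits = softmax(sits_logits)
    fused = w_aerial * p_aerial + (1.0 - w_aerial) * p_sits
    return fused.argmax(axis=-1)  # (H, W) label map
```

    Because the branches stay independent until this final step, each can be trained and upgraded separately, which is the usual motivation for late over early fusion.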
  • Few-Shot Semantic Segmentation in Remote Sensing: A Review on Definitions, Methods, Datasets, Advances and Future Trends
    (MDPI AG, 2026-02-18)
    Petrov, Marko; Pandilova, Ema
    Semantic segmentation in remote sensing images, which is the task of classifying each pixel of the image in a specific category, is widely used in areas such as disaster management, environmental monitoring, precision agriculture, and many others. However, traditional semantic segmentation methods face a major challenge: they require large amounts of annotated data to train effectively. To tackle this challenge, few-shot semantic segmentation has been introduced, where the models can learn and adapt quickly to new classes from just a few annotated samples. This paper presents a comprehensive review of recent advances in few-shot semantic segmentation (FSSS) for remote sensing, covering datasets, methods, and emerging research directions. We first outline the fundamental principles of few-shot learning and summarize commonly used remote-sensing benchmarks, emphasizing their scale, geographic diversity, and relevance to episodic evaluation. Next, we categorize FSSS methods into major families (meta-learning, conditioning-based, and foundation-assisted approaches) and analyze how architectural choices, pretraining strategies, and inference protocols influence performance. The discussion highlights empirical trends across datasets, the behavior of different conditioning mechanisms, the impact of self-supervised and multimodal pretraining, and the role of reproducibility and evaluation design. Finally, we identify key challenges and future trends, including benchmark standardization, integration with foundation and multimodal models, efficiency at scale, and uncertainty-aware adaptation. Collectively, they signal a shift toward unified, adaptive models capable of segmenting novel classes across sensors, regions, and temporal domains with minimal supervision.
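    As one concrete instance of the conditioning-based family the review surveys, a minimal prototype-matching step (masked average pooling over support features, cosine matching on the query) might look as follows. The shapes and function names are illustrative assumptions and are not tied to any specific method in the review.

```python
import numpy as np

def masked_average_pool(features, mask):
    """Build a class prototype from support-image features.

    features: (H, W, D) feature map; mask: (H, W) binary mask of the class.
    """
    m = mask.astype(features.dtype)
    return (features * m[..., None]).sum(axis=(0, 1)) / max(m.sum(), 1e-8)

def segment_by_prototypes(query_feats, prototypes):
    """Label each query pixel with the most cosine-similar class prototype.

    query_feats: (H, W, D); prototypes: list of (D,) class vectors.
    """
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = np.stack(prototypes)
    p = p / (np.linalg.norm(p, axis=-1, keepdims=True) + 1e-8)
    return (q @ p.T).argmax(axis=-1)  # (H, W) label map
```

    In an episodic evaluation, the prototypes would be rebuilt from each episode's few annotated support samples before segmenting the query image.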
  • U-Net Ensemble for Enhanced Semantic Segmentation in Remote Sensing Imagery
    (MDPI, 2024-06-08)
    Semantic segmentation of remote sensing imagery stands as a fundamental task within the domains of both remote sensing and computer vision. Its objective is to generate a comprehensive pixel-wise segmentation map of an image, assigning a specific label to each pixel. This facilitates in-depth analysis and comprehension of the Earth’s surface. In this paper, we propose an approach for enhancing semantic segmentation performance by employing an ensemble of U-Net models with three different backbone networks: Multi-Axis Vision Transformer, ConvFormer, and EfficientNet. The final segmentation maps are generated through a geometric mean ensemble method, leveraging the diverse representations learned by each backbone network. The effectiveness of the base U-Net models and the proposed ensemble is evaluated on multiple datasets commonly used for semantic segmentation tasks in remote sensing imagery, including LandCover.ai, LoveDA, INRIA, UAVid, and ISPRS Potsdam datasets. Our experimental results demonstrate that the proposed approach achieves state-of-the-art performance, showcasing its effectiveness and robustness in accurately capturing the semantic information embedded within remote sensing images.
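    The geometric-mean ensembling step can be sketched as follows, assuming each model already outputs per-pixel softmax probabilities; the function name and epsilon are my own, not from the paper.

```python
import numpy as np

def geometric_mean_ensemble(prob_maps, eps=1e-8):
    """Combine per-model class-probability maps by per-pixel geometric mean.

    prob_maps: list of (H, W, C) softmax-probability arrays, one per
    U-Net backbone (e.g. MaxViT, ConvFormer, EfficientNet).
    """
    stacked = np.stack(prob_maps)                  # (M, H, W, C)
    log_mean = np.log(stacked + eps).mean(axis=0)  # mean of logs = log of geometric mean
    fused = np.exp(log_mean)
    fused /= fused.sum(axis=-1, keepdims=True)     # renormalize to probabilities
    return fused.argmax(axis=-1)                   # (H, W) label map
```

    Unlike an arithmetic mean, the geometric mean penalizes a class whenever any single model assigns it near-zero probability, so the ensemble only keeps predictions the backbones broadly agree on.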
  • Semantic Segmentation of Remote Sensing Images: Definition, Methods, Datasets and Applications
    (Springer Nature, 2024-02-26)
    Semantic segmentation of remote sensing images is a vital task in the field of remote sensing and computer vision. The goal is to produce a dense pixel-wise segmentation map of an image, where a specific class is assigned to each pixel, enabling detailed analysis and understanding of the Earth's surface. This paper provides an overview of semantic segmentation in remote sensing, starting with a definition of the task and its significance in extracting valuable information from remote sensing imagery. Various methods used for semantic segmentation in remote sensing are discussed, including traditional approaches such as region-based and pixel-based methods, as well as more recent deep learning-based techniques. Next, the paper delves into the available datasets for semantic segmentation of remote sensing images. Many available datasets are reviewed, highlighting their characteristics, including the number of images, image size, number of labels, spatial resolution, format and spectral bands. These datasets serve as valuable resources for training, evaluating, and benchmarking semantic segmentation algorithms in remote sensing applications. Furthermore, the paper highlights the broad range of applications enabled by semantic segmentation in remote sensing, including urban planning, land cover mapping, disaster management, environmental monitoring, and precision agriculture. Overall, this paper serves as a comprehensive guide to semantic segmentation of remote sensing images, providing insights into its definition, methods, available datasets and wide-ranging applications.
  • Semantic Segmentation of Unmanned Aerial Vehicle Remote Sensing Images using SegFormer
    (2024-10-01)
    The escalating use of Unmanned Aerial Vehicles (UAVs) as remote sensing platforms has garnered considerable attention, proving invaluable for ground object recognition. While satellite remote sensing images face limitations in resolution and weather susceptibility, UAV remote sensing, employing low-speed unmanned aircraft, offers enhanced object resolution and agility. The advent of advanced machine learning techniques has propelled significant strides in image analysis, particularly in semantic segmentation for UAV remote sensing images. This paper evaluates the effectiveness and efficiency of SegFormer, a semantic segmentation framework, for the semantic segmentation of UAV images. SegFormer variants, ranging from real-time (B0) to high-performance (B5) models, are assessed using the UAVid dataset tailored for semantic segmentation tasks. The research details the architecture and training procedures specific to SegFormer in the context of UAV semantic segmentation. Experimental results showcase the model’s performance on the benchmark dataset, highlighting its ability to accurately delineate objects and land cover features in diverse UAV scenarios, leading to both high efficiency and performance.
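    Benchmark results like those described above are typically reported as mean Intersection-over-Union (mIoU). A minimal sketch of the metric, written from its standard definition rather than from the paper's evaluation code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over all classes present in the data.

    pred, target: integer label maps of equal shape.
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (target.ravel(), pred.ravel()), 1)   # confusion matrix
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    iou = inter / np.maximum(union, 1)
    valid = union > 0  # ignore classes absent from both prediction and target
    return iou[valid].mean()
```

    Averaging IoU per class rather than per pixel keeps rare classes (e.g. small objects in UAV scenes) from being drowned out by dominant background classes.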