Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/23167
DC Field                   Value   Language
dc.contributor.author      Jevremovic, Aleksandar   en_US
dc.contributor.author      Veinovic, Mladen   en_US
dc.contributor.author      Cabarkapa, Milan   en_US
dc.contributor.author      Krstic, Marko   en_US
dc.contributor.author      Chorbev, Ivan   en_US
dc.contributor.author      Dimitrovski, Ivica   en_US
dc.contributor.author      Garcia, Nuno   en_US
dc.contributor.author      Pombo, Nuno   en_US
dc.contributor.author      Stojmenovic, Milos   en_US
dc.date.accessioned        2022-09-28T12:44:15Z   -
dc.date.available          2022-09-28T12:44:15Z   -
dc.date.issued             2021-09-20   -
dc.identifier.uri          http://hdl.handle.net/20.500.12188/23167   -
dc.description.abstract    It is every parent’s wish to protect their children from online pornography, cyber bullying and cyber predators. Several existing approaches analyze a limited amount of information stemming from the interactions of the child with the corresponding online party. Some restrict access to websites based on a blacklist of known forbidden URLs, others attempt to parse and analyze the exchanged multimedia content between the two parties. However, new URLs can be used to circumvent a blacklist, and images, video, and text can individually appear to be safe, but need to be judged jointly. We propose a highly modular framework of analyzing content in its final form at the user interface, or Human Computer Interaction (HCI) layer, as it appears before the child: on the screen and through the speakers. Our approach is to produce Children’s Agents for Secure and Privacy Enhanced Reaction (CASPER), which analyzes screen captures and audio signals in real time in order to make a decision based on all of the information at its disposal, with limited hardware capabilities. We employ a collection of deep learning techniques for image, audio and text processing in order to categorize visual content as pornographic or neutral, and textual content as cyberbullying or neutral. We additionally contribute a custom dataset that offers a wide spectrum of objectionable content for evaluation and training purposes. CASPER demonstrates an average accuracy of 88% and an F1 score of 0.85 when classifying text, and an accuracy of 95% when classifying pornography.   en_US
dc.publisher               IEEE   en_US
dc.relation.ispartof       IEEE Access   en_US
dc.subject                 Cyber-bullying, cyber-grooming, online safety, pornography filter, real time agent   en_US
dc.title                   Keeping Children Safe Online With Limited Resources: Analyzing What is Seen and Heard   en_US
dc.type                    Journal Article   en_US
item.grantfulltext         open   -
item.fulltext              With Fulltext   -
crisitem.author.dept       Faculty of Computer Science and Engineering   -
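
As a rough illustration of the approach summarized in dc.description.abstract above (a real-time agent that classifies what appears on the child's screen), the Python sketch below polls the display and hands each frame to placeholder classifiers. It is not the authors' CASPER implementation: ImageGrab as the capture backend, the polling interval, and both classify_* stubs are assumptions made only for illustration.

    # Illustrative sketch of a periodic screen-capture loop in the spirit of the
    # CASPER agent described in the abstract; the classifiers are hypothetical stubs.
    import time

    from PIL import ImageGrab  # Pillow; ImageGrab.grab() works on Windows/macOS

    def classify_image(image):
        """Hypothetical visual classifier: return 'pornographic' or 'neutral'."""
        return "neutral"  # placeholder decision

    def classify_text(text):
        """Hypothetical text classifier: return 'cyberbullying' or 'neutral'."""
        return "neutral"  # placeholder decision

    def monitor(interval_seconds=5.0):
        """Capture the screen periodically and flag objectionable visual content."""
        while True:
            frame = ImageGrab.grab()            # what the child currently sees
            visual_label = classify_image(frame)
            # In the paper's setting, on-screen text would also be extracted
            # (e.g. via OCR) and passed to classify_text; omitted here to keep
            # the sketch self-contained.
            if visual_label != "neutral":
                print("objectionable visual content detected")
            time.sleep(interval_seconds)

    if __name__ == "__main__":
        monitor()
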
Appears in Collections: Faculty of Computer Science and Engineering: Journal Articles
Files in This Item:
File                                                                                          Description   Size      Format
Keeping_Children_Safe_Online_With_Limited_Resources_Analyzing_What_is_Seen_and_Heard.pdf                   1.03 MB   Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.