Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/34711
DC Field | Value | Language
dc.contributor.author | Kolovska, Ana | en_US
dc.contributor.author | Gusev, Marjan | en_US
dc.contributor.author | Mileski, Dimitar | en_US
dc.date.accessioned | 2026-01-30T07:27:38Z | -
dc.date.available | 2026-01-30T07:27:38Z | -
dc.date.issued | 2025-11-25 | -
dc.identifier.uri | http://hdl.handle.net/20.500.12188/34711 | -
dc.description | Accepted version | en_US
dc.description.abstract | Energy efficiency is a crucial challenge when deploying Large Language Models (LLMs). Electricity usage and related CO2 emissions can differ greatly depending on model architecture, parameter size, prompt length, and inference hardware. In this study, we evaluate 31 popular Ollama models across CPU and GPU inference, resulting in 60 testing scenarios. Energy and carbon metrics were gathered using the NVML and CodeCarbon libraries, providing insights into the environmental impact of LLM inference in data center settings. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | LLM, Ollama, Energy Efficiency, Carbon Footprint, Electricity Consumption, CPU Inference, GPU Inference, Data Centers, Environmental Impact | en_US
dc.title | Small Prompts, Big Energy and CO2 Impact: Benchmarking Ollama LLMs on CPU and GPU | en_US
dc.type | Proceeding article | en_US
dc.relation.conference | 2025 33rd Telecommunications Forum (TELFOR) | en_US
dc.identifier.doi | 10.1109/telfor67910.2025.11314365 | -
dc.identifier.url | http://xplorestaging.ieee.org/ielx8/11314150/11314152/11314365.pdf?arnumber=11314365 | -
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
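The abstract describes deriving energy and CO2 metrics from NVML power readings (the paper's own code is not part of this record). A minimal illustrative sketch of that kind of conversion, assuming evenly spaced GPU power samples in milliwatts (the unit NVML's power query reports) and an assumed grid carbon intensity of 475 g CO2/kWh:

```python
# Hedged sketch, not the authors' implementation: turn a series of GPU
# power samples (milliwatts, as NVML reports them) into energy (Wh) and
# an estimated CO2 mass (g), the kinds of metrics the study gathers.

def energy_wh(samples_mw, interval_s):
    """Integrate evenly spaced power samples (mW) into watt-hours."""
    joules = sum(mw / 1000.0 for mw in samples_mw) * interval_s  # W * s = J
    return joules / 3600.0

def co2_grams(wh, grid_gco2_per_kwh=475.0):
    """Estimate grams of CO2; 475 g/kWh is an assumed grid average."""
    return wh / 1000.0 * grid_gco2_per_kwh

# Example: 60 one-second samples of a GPU drawing ~250 W during inference
samples = [250_000] * 60  # milliwatts
wh = energy_wh(samples, interval_s=1.0)
print(f"{wh:.3f} Wh, {co2_grams(wh):.3f} g CO2")  # → 4.167 Wh, 1.979 g CO2
```

In practice the samples would come from pynvml's `nvmlDeviceGetPowerUsage`, and a tracker such as CodeCarbon's `EmissionsTracker` would supply the grid-intensity factor for the data center's region rather than a fixed constant.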
Appears in Collections:Faculty of Computer Science and Engineering: Conference papers
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.