This paper investigates how the configuration of on-device hardware affects energy consumption for neural network inference with regular fine-tuning.
This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.

Authors:
(1) Minghao Yan, University of Wisconsin-Madison;
(2) Hongyi Wang, Carnegie Mellon University;
(3) Shivaram Venkataraman, myan@cs.wisc.edu.

Table of Links:
Abstract & Introduction
Motivation
Opportunities
Architecture Overview
Problem Formulation: Two-Phase Tuning
Modeling Workload Interference
Experiments
Conclusion & References
A. Hardware Details
B. Experimental Results
C. Arithmetic Intensity
D.
We then sum each component’s power consumption to obtain the overall power draw, and multiply that power by the inference time to obtain the energy cost for each inference request. To obtain a steady reading, we send 1,000 inference requests for each hardware configuration for every model that we test. We cross-validate our measurements using a USB digital multimeter that transmits readings to computer software in real time via Bluetooth.
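The energy accounting described here reduces to multiplying the summed per-component power by the measured inference latency and averaging over the 1,000 requests. The sketch below illustrates that bookkeeping only; `read_component_power_watts` and `run_inference` are hypothetical placeholders for the board's power sensors and the deployed model, not part of the paper's tooling.

```python
# Minimal sketch of the energy measurement described above (assumed helpers,
# not the paper's actual measurement code).

import time
import random

NUM_REQUESTS = 1000  # requests sent per hardware configuration, per model


def read_component_power_watts():
    """Placeholder: instantaneous power draw (W) per hardware component."""
    return {"gpu": random.uniform(4.0, 8.0),
            "cpu": random.uniform(1.0, 2.0),
            "memory": random.uniform(0.5, 1.5)}


def run_inference(request):
    """Placeholder: run one inference request on the device."""
    time.sleep(0.001)  # stands in for the model's actual latency


def average_energy_joules(requests):
    """Energy per request = (sum of component power) x inference time."""
    energies = []
    for request in requests:
        start = time.perf_counter()
        run_inference(request)
        elapsed = time.perf_counter() - start               # seconds
        # In practice, power would be sampled continuously during the
        # request and averaged; a single reading is used here for brevity.
        power = sum(read_component_power_watts().values())  # watts
        energies.append(power * elapsed)                    # joules
    return sum(energies) / len(energies)


if __name__ == "__main__":
    print(average_energy_joules(range(NUM_REQUESTS)))
```

Averaging over a large number of requests smooths out transient power fluctuations, which is why a steady reading requires many repetitions per configuration.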