PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices: Conclusion & References



This paper investigates how the configuration of on-device hardware affects energy consumption for neural network inference with regular fine-tuning.
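The tuning problem the summary alludes to can be sketched as a configuration search over hardware knobs: try combinations of GPU frequency, memory frequency, and batch size, and keep the lowest-energy setting that still meets a latency target. The sketch below is a minimal toy illustration of that idea; the knob values and both cost models are assumptions for demonstration, not the paper's actual measurements or method.

```python
# Toy sketch of an energy-oriented hardware configuration search.
# All frequency steps and the latency/energy models below are hypothetical.

from itertools import product

GPU_FREQS_MHZ = [300, 600, 900, 1200]   # assumed GPU frequency steps
MEM_FREQS_MHZ = [800, 1600]             # assumed memory frequency steps
BATCH_SIZES = [1, 2, 4, 8]
LATENCY_BUDGET_MS = 50.0                # per-request latency target

def predict_latency_ms(gpu_f, mem_f, batch):
    # Toy model: latency falls with frequency, grows sublinearly with batch.
    return 2.0e5 * batch ** 0.8 / (gpu_f * mem_f ** 0.5)

def predict_energy_mj(gpu_f, mem_f, batch):
    # Toy model: power rises superlinearly with GPU frequency; energy per
    # request is power * latency amortized over the batch.
    power_w = 1.0 + (gpu_f / 300.0) ** 2 + mem_f / 1600.0
    return power_w * predict_latency_ms(gpu_f, mem_f, batch) / batch

def best_config():
    # Exhaustively score every feasible configuration; return the
    # (energy, gpu_freq, mem_freq, batch) tuple with minimum energy.
    feasible = [
        (predict_energy_mj(g, m, b), g, m, b)
        for g, m, b in product(GPU_FREQS_MHZ, MEM_FREQS_MHZ, BATCH_SIZES)
        if predict_latency_ms(g, m, b) <= LATENCY_BUDGET_MS
    ]
    return min(feasible) if feasible else None

if __name__ == "__main__":
    energy, g, m, b = best_config()
    print(f"gpu={g}MHz mem={m}MHz batch={b} energy/request={energy:.1f}mJ")
```

On a real device the two predictors would be replaced by actual latency and energy measurements taken at each setting, which is what makes per-model tuning worthwhile.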

This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license. Authors: Minghao Yan, University of Wisconsin-Madison; Hongyi Wang, Carnegie Mellon University; Shivaram Venkataraman, myan@cs.wisc.edu.

Table of Links:
  • Abstract & Introduction
  • Motivation
  • Opportunities
  • Architecture Overview
  • Problem Formulation: Two-Phase Tuning
  • Modeling Workload Interference
  • Experiments
  • Conclusion & References
  • A. Hardware Details
  • B. Experimental Results
  • C. Arithmetic Intensity
  • D.

Micro, 42:37–47, 2022.

He, C., Li, S., So, J., Zeng, X., Zhang, M., Wang, H., Wang, X., Vepakomma, P., Singh, A., Qiu, H., et al. FedML: A research library and benchmark for federated machine learning. arXiv preprint arXiv:2007.13518, 2020.

Hodak, M., Gorkovenko, M., and Dholakia, A. Towards power efficiency in deep learning on data center hardware. In 31st International Conference on Computer Design.

NVIDIA multi-instance GPU, 2023b. URL https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html.

Peng, Y., Zhu, Y., Chen, Y., Bao, Y., Yi, B., Lan, C., Wu, C., and Guo, C. A generic communication scheduler for distributed DNN training acceleration. In SOSP, 2019.

Qiao, A., Choe, S. K., Subramanya, S. J., Neiswanger, W., Ho, Q., Zhang, H., Ganger, G. R., and Xing, E. P. Pollux: Co-adaptive cluster scheduling for goodput-optimized deep learning. In OSDI, 2021.

Rajpurkar, P., Zhang, J.




