Title: Evaluating and Improving Large-Scale Machine Learning Frameworks
Author: Graur, Dan (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Rellermeyer, Jan S. (mentor); Alonso, Gustavo (mentor); Epema, Dick (graduation committee)
Degree granting institution: Delft University of Technology
Date: 2019-09-10

Abstract: Given the increasing popularity of Machine Learning and the ever-growing need to solve larger and more complex learning challenges, it is unsurprising that numerous distributed learning strategies have been proposed in recent years, along with many large-scale Machine Learning frameworks. It is, however, unclear how well these strategies perform across different cluster and batch sizes, or what their hardware demands are, as there is little research in the public domain on this matter. Identifying the weaknesses and limitations of the parameter update strategies is nevertheless essential for increasing the efficiency of large-scale Machine Learning and making it commonplace. This thesis seeks to answer these questions and to provide evidence of the strategies' limitations and their root causes. To make the study possible, the thesis examines particular implementations of the strategies within the TensorFlow and Caffe2 frameworks.

Subjects: Machine Learning; Deep Learning; TensorFlow; Caffe2; Neural Networks; Scalability; Bottleneck Identification; Backpropagation; Large Scale; Distributed Systems; Clusters; Nodes; Limitations; Performance; Hardware; ResNet; Classification; Images; Parameter Update

To reference this document use: http://resolver.tudelft.nl/uuid:ea9d655d-eabd-4409-9d06-0e10dd7124ef
Part of collection: Student theses
Document type: master thesis
Rights: © 2019 Dan Graur
Files: PDF, MSc_Thesis_Graur_Dan.pdf (86.64 MB)