A typical workflow with the Neuron SDK is to compile trained ML models on a compilation instance and then distribute the resulting artifacts to a fleet of deployment instances for execution. Neuron enables TensorFlow to be used for all of these steps.
1.1. Select an AMI of your choice.
Neuron uses standard package managers (apt, yum, pip, and conda) to install packages and keep them up to date. Please refer to the applicable Linux section for detailed configuration steps.
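As a sketch of the pip route, the commands below point pip at the Neuron package repository and install the Neuron compiler together with the Neuron-integrated TensorFlow. The repository URL and package names follow the public Neuron documentation, but verify them against the release notes for your SDK version.

```shell
# Point pip at the AWS Neuron package repository (URL per the Neuron docs).
pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install the Neuron compiler and the Neuron-integrated TensorFlow package.
pip install neuron-cc tensorflow-neuron
```

On apt- or yum-based systems, the system packages (e.g. the Neuron runtime and driver) are installed through the corresponding Linux package manager as described in the applicable Linux section.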
Neuron supports Python versions 3.5, 3.6, and 3.7.
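As a minimal sketch of the support statement above, the helper below checks whether a given Python version is one of the supported ones; the function and constant names are hypothetical, introduced only for illustration.

```python
import sys

# Python minor versions supported by Neuron, per the statement above.
SUPPORTED_VERSIONS = {(3, 5), (3, 6), (3, 7)}

def is_supported_python(version_info=None):
    """Return True if the given (or running) Python is Neuron-supported."""
    if version_info is None:
        version_info = sys.version_info
    return (version_info[0], version_info[1]) in SUPPORTED_VERSIONS
```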
Refer to the AWS DLAMI Getting Started guide to learn how to use the DLAMI with Neuron. When first using a released DLAMI, there may be additional updates available for the Neuron packages installed in it.
NOTE: Only DLAMI versions 26.0 and newer have Neuron support included.
1.2. Select and launch an EC2 instance of your choice for compilation. Launch the instance by following the EC2 instance launch instructions.
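For example, a compilation instance can be launched with the AWS CLI; the AMI ID, instance type, and key name below are placeholders to replace with your own choices (the AMI being the one selected in step 1.1).

```shell
# Launch a compute-optimized instance for compilation (all values are placeholders).
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type c5.4xlarge \
    --key-name my-key-pair \
    --count 1
```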
1.3. Select and launch a deployment (Inf1) instance of your choice.