Gerald Corzo, 5/26/2020: Workshop Google Machine Learning Tools (Services)
Contents

Workshop Google Machine Learning tools (services)
  Motivation
  Overview
  Reference information
  Google sites for ML
  Other sources of Machine Learning
Appendix
  Help on a function for Dense layer
Motivation
In general, there is a world of alternatives for Machine Learning, and it is not easy to follow one path. New
libraries keep appearing in the ML arena; however, there is little or no compatibility (legacy) among them.
New learning algorithms are presented in environments with new ways to design models. This also relates to
the ways models are connected to data, and to the facilities available for developing on the cloud. As a result,
finding the "path" for solving a problem can be quite cumbersome and time-consuming.
So where can we start to learn? A suggestion is to follow these links.
Overview
This exercise is meant to give a short description of the websites that provide data and Machine Learning
tools, as well as to introduce some of the main principles of using these tools.
This workshop does not aim to describe the algorithms, as these were already covered in the lectures.
As an introduction to this workshop, it is recommended to watch the following tutorial, "Zero to Hero",
part 1:
https://www.youtube.com/watch?v=KNAWp2S3w94
Tensorflow version: Tensorflow 2.0 is relatively new, and links found on Google may still point
to the old version.
For this Colab workshop we will have:
Part 1: Overview of sites and introduction to Tensorflow
Part 2: Exercise on reading data
Part 3: Creating a forecasting model using an ANN
Reference information
MIT has the following site with some good slides and links to examples:
https://cbmm.mit.edu/sites/default/files/documents/Tensorflow%202_0%20slides.pdf
https://machinelearningmastery.com/tensorflow-tutorial-deep-learning-with-tf-keras/
As part of the overview, these sites will be described in the online session:
Firebase - Mobile and cloud ML https://www.youtube.com/watch?v=ejrn_JHksws&feature=youtu.be&list=PLl-K7zZEsYLmOF_07IayrTntevxtbUxDL
Earth Engine https://earthengine.google.com/
Kaggle (Google) https://www.kaggle.com/
Google AI cloud https://cloud.google.com/ai-platform
Kubeflow (ML on Kubernetes) https://www.kubeflow.org/
https://ai.google/tools/
Courses
Other sources of Machine Learning
Microsoft
https://www.microsoft.com/en-us/ai/ai-platform
https://dotnet.microsoft.com/apps/machinelearning-ai
https://azure.microsoft.com/en-us/overview/ai-platform/
Interesting blog to read: https://medium.com/@Mareks_082/ml-net-machine-learning-library-from-microsoft-39d265761b34
Tensorflow library
Tensorflow is a library meant to facilitate the operations that are common when processing large amounts
of data. It is built on the concept of a tensor: a vector or array of multiple dimensions. It can also be
seen as a way of mapping flows of tensors through different types of parallel processes, optimizing the use of
the available infrastructure (most of the time suited to GPU-type operations). In this sense it is quite fast,
but in general algorithms have to be built on top of Tensorflow. Keras is the most common estimator library;
it allows the use of Tensorflow types and integrates their processes into Machine Learning algorithms.
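As a minimal illustration of the tensor concept (a sketch, not part of the original workshop code):

import tensorflow as tf

# A rank-3 tensor: 2 blocks of 3 rows and 4 columns
t = tf.zeros([2, 3, 4])
print(t.shape)     # (2, 3, 4)
print(tf.rank(t))  # a rank-3 tensor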
The figure shows, from the bottom up, how the gray boxes representing the hardware infrastructure used to
process the machine learning algorithms form the base of the task. The Tensorflow infrastructure on top of it
deals with the data and processes so that the tasks can be distributed in an "optimal" way.
CPU = Central processing unit
GPU = Graphical processing unit
TPU = Tensor processing unit
XLA = Accelerated Linear Algebra (the linear algebra compiler that targets these devices, including TPUs)
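To see which of these devices Tensorflow can actually use on your machine, the runtime can be queried (a small sketch; the output depends on the installation):

import tensorflow as tf

# List the hardware devices visible to Tensorflow
print(tf.config.list_physical_devices('CPU'))
print(tf.config.list_physical_devices('GPU'))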
Installation
Tensorflow variables
Tensorflow relates closely to numpy, and also to other libraries such as Google Earth Engine. Variables for
Tensorflow are created with Tensorflow functions, as in the following example of how to create a constant.
import tensorflow as tf

# As an example of a constant
hello = tf.constant("My first tensor constant")
print(hello)

# A constant x and a variable y defined from it
x = tf.constant(1, name='x')
y = tf.Variable(x + 9, name='y')
print(y)

# The value of y can be read back as a normal number
print("A normal variable y is " + str(y.numpy()))
## A normal variable y is 10
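One difference worth noting: a tf.Variable can be changed after creation, while a tf.constant cannot. A short sketch, continuing with the variable y from above:

# Variables can be updated in place with assign
y.assign(20)
print(y.numpy())   # 20
y.assign_add(5)
print(y.numpy())   # 25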
Assume you have five input and output samples from an experiment. From a mathematical point of view, you
could represent the data from the experiment as vectors of the values.
Input
x = [x1 , x2 , x3 , x4 , x5 ]
Output
y = [y1 , y2 , y3 , y4 , y5 ]
If you would like to know how the output relates to the input, you could think about how to obtain a function
such that y = f (x). This could be solved with a standard mathematical analysis of the samples, using a linear
regression fit.
However, if you did not know how to do the linear regression, how would you learn the relation between
the input vector x and the output vector y (first in your mind)?
Your task is: guess the equation that relates x and y from the following example.
Keras ANN
We will build a neural network that helps us solve the above relation. A step-by-step description
follows.
Import libraries

# Imports reconstructed to match the code used below
import tensorflow as tf
from tensorflow import keras
import numpy as np

tf.__version__
## '2.2.0'
# Create the variables in numpy as floats to fit tensorflow requirements
xs = np.array([-1, 0, 1, 2, 3], dtype=float)
ys = np.array([-7, -2, 3, 8, 13], dtype=float)

# A single Dense layer with one neuron: a linear model y = w*x + b
model = keras.Sequential(name="MyfirstModel")
model.add(keras.layers.Dense(units=1, input_shape=[1], name="MyLayer"))
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=50)  # ,callbacks=[tensorflow_callback])
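The call to model.fit returns a History object; capturing it allows checking how the loss decreased during training (a sketch; the exact values depend on the random initialization, and calling fit again continues training):

history = model.fit(xs, ys, epochs=50)
print(history.history['loss'][:3])   # loss of the first epochs
print(history.history['loss'][-1])   # loss of the last epoch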
Making a prediction
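The code for this step is not shown in the handout; a minimal sketch of what it likely looks like (the input value 10 is an arbitrary choice):

# Predict the output for a new, unseen input value
print(model.predict([10.0]))
# The result should be close to the value of the underlying linear relation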
Model visualization
To view the contents of your model you can use get_weights and get_config.
m=model.get_weights()
print(m)
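Similarly, get_config (mentioned above) returns the configuration as a Python dictionary; a short sketch for both the whole model and its single layer:

# Configuration of the model and of its one Dense layer
print(model.get_config())
print(model.layers[0].get_config())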
A diagram of the model can be produced by using the function plot_model in the utilities section of the keras library, as follows:
tf.keras.utils.plot_model(
    model,
    to_file="model.png",
    show_shapes=True,
    show_layer_names=True,
    rankdir="LR",
    expand_nested=True,
    dpi=96,
)
Figure 2: Model with one layer and one neuron, for one output
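Note that plot_model requires the pydot and graphviz packages to be installed; in Colab they are typically available already.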
A more visual analysis of the model structure and its results can be obtained using TensorBoard.
%load_ext tensorboard
%tensorboard --logdir logs
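For TensorBoard to display anything, the training run must first write logs. A sketch of how the commented-out tensorflow_callback in the fit call above could be defined (the directory name "logs" is an assumption, chosen to match the --logdir argument):

# Hypothetical callback writing training logs for TensorBoard
tensorflow_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(xs, ys, epochs=50, callbacks=[tensorflow_callback])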
A way to understand more about keras.layers is to compare the matrix multiplication done in
numpy with the one in keras. See the following example and try to reproduce it in tensorflow (a sketch follows after the numpy code).
# Two 3-D arrays: x has shape (1, 5, 2) and y has shape (1, 2, 5)
x = np.arange(10).reshape(1, 5, 2)
print(x)
y = np.arange(10, 20).reshape(1, 2, 5)
print(y)
print("Matrix Multiplication")
# matmul multiplies the (2, 5) matrix in y by the (5, 2) matrix in x,
# giving a result of shape (1, 2, 2)
z = np.matmul(y, x)
print(z)
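As a sketch of the comparison suggested above, the same multiplication can be written with tf.matmul, which follows the same batch-matrix rules as np.matmul:

# The same multiplication with tensorflow tensors
# (cast to float32, the dtype Tensorflow handles most generally)
xt = tf.constant(x, dtype=tf.float32)
yt = tf.constant(y, dtype=tf.float32)
zt = tf.matmul(yt, xt)
print(zt)  # shape (1, 2, 2), the same values as the numpy result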
Exercise 2 (Build your ANN)

How many epochs do you need to build an ANN that replicates a sine function with 2 neurons?

1. Generate an input vector x of values from 0 to 6 with 0.1 intervals.
2. Estimate the y = sin(x) function for all the values of the input vector x.
3. Build an ANN model that estimates y.

Run the same example with only one neuron and discuss the result.
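A minimal sketch of steps 1 and 2, to get started (building and training the model in step 3 is the exercise itself):

import numpy as np

# Step 1: input vector from 0 to 6 in 0.1 intervals
x = np.arange(0, 6.1, 0.1)
# Step 2: target values of the sine function
y = np.sin(x)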
Appendix
help(tf.keras.layers.Dense)
## |
## | Method resolution order:
## | Dense
## | tensorflow.python.keras.engine.base_layer.Layer
## | tensorflow.python.module.module.Module
## | tensorflow.python.training.tracking.tracking.AutoTrackable
## | tensorflow.python.training.tracking.base.Trackable
## | tensorflow.python.keras.utils.version_utils.LayerVersionSelector
## | builtins.object
## |
## | Methods defined here:
## |
## | __init__(self, units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_i
## |
## | build(self, input_shape)
## | Creates the variables of the layer (optional, for subclass implementers).
## |
## | This is a method that implementers of subclasses of `Layer` or `Model`
## | can override if they need a state-creation step in-between
## | layer instantiation and layer call.
## |
## | This is typically used to create the weights of `Layer` subclasses.
## |
## | Arguments:
## | input_shape: Instance of `TensorShape`, or list of instances of
## | `TensorShape` if the layer expects a list of inputs
## | (one instance per input).
## |
## | call(self, inputs)
## | This is where the layer's logic lives.
## |
## | Arguments:
## | inputs: Input tensor, or list/tuple of input tensors.
## | **kwargs: Additional keyword arguments.
## |
## | Returns:
## | A tensor or list/tuple of tensors.
## |
## | compute_output_shape(self, input_shape)
## | Computes the output shape of the layer.
## |
## | If the layer has not been built, this method will call `build` on the
## | layer. This assumes that the layer will later be used with inputs that
## | match the input shape provided here.
## |
## | Arguments:
## | input_shape: Shape tuple (tuple of integers)
## | or list of shape tuples (one per output tensor of the layer).
## | Shape tuples can include None for free dimensions,
## | instead of an integer.
## |
## | Returns:
## | An input shape tuple.
## |
## | get_config(self)
## | Returns the config of the layer.
## |
## | A layer config is a Python dictionary (serializable)
## | containing the configuration of a layer.
## | The same layer can be reinstantiated later
## | (without its trained weights) from this configuration.
## |
## | The config of a layer does not include connectivity
## | information, nor the layer class name. These are handled
## | by `Network` (one layer of abstraction above).
## |
## | Returns:
## | Python dictionary.
## |
## | ----------------------------------------------------------------------
## | Methods inherited from tensorflow.python.keras.engine.base_layer.Layer:
## |
## | __call__(self, *args, **kwargs)
## | Wraps `call`, applying pre- and post-processing steps.
## |
## | Arguments:
## | *args: Positional arguments to be passed to `self.call`.
## | **kwargs: Keyword arguments to be passed to `self.call`.
## |
## | Returns:
## | Output tensor(s).
## |
## | Note:
## | - The following optional keyword arguments are reserved for specific uses:
## | * `training`: Boolean scalar tensor of Python boolean indicating
## | whether the `call` is meant for training or inference.
## | * `mask`: Boolean input mask.
## | - If the layer's `call` method takes a `mask` argument (as some Keras
## | layers do), its default value will be set to the mask generated
## | for `inputs` by the previous layer (if `input` did come from
## | a layer that generated a corresponding mask, i.e. if it came from
## | a Keras layer with masking support.
## |
## | Raises:
## | ValueError: if the layer's `call` method returns None (an invalid value).
## | RuntimeError: if `super().__init__()` was not called in the constructor.
## |
## | __delattr__(self, name)
## | Implement delattr(self, name).
## |
## | __getstate__(self)
## |
## | __setattr__(self, name, value)
## | Support self.foo = trackable syntax.
## |
## | __setstate__(self, state)
## |
## | add_loss(self, losses, inputs=None)
## |
## | Arguments:
## | losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses
## | may also be zero-argument callables which create a loss tensor.
## | inputs: Ignored when executing eagerly. If anything other than None is
## | passed, it signals the losses are conditional on some of the layer's
## | inputs, and thus they should only be run where these inputs are
## | available. This is the case for activity regularization losses, for
## | instance. If `None` is passed, the losses are assumed
## | to be unconditional, and will apply across all dataflows of the layer
## | (e.g. weight regularization losses).
## |
## | add_metric(self, value, aggregation=None, name=None)
## | Adds metric tensor to the layer.
## |
## | Args:
## | value: Metric tensor.
## | aggregation: Sample-wise metric reduction function. If `aggregation=None`,
## | it indicates that the metric tensor provided has been aggregated
## | already. eg, `bin_acc = BinaryAccuracy(name='acc')` followed by
## | `model.add_metric(bin_acc(y_true, y_pred))`. If aggregation='mean', the
## | given metric tensor will be sample-wise reduced using `mean` function.
## | eg, `model.add_metric(tf.reduce_sum(outputs), name='output_mean',
## | aggregation='mean')`.
## | name: String metric name.
## |
## | Raises:
## | ValueError: If `aggregation` is anything other than None or `mean`.
## |
## | add_update(self, updates, inputs=None)
## | Add update op(s), potentially dependent on layer inputs. (deprecated arguments)
## |
## | Warning: SOME ARGUMENTS ARE DEPRECATED: `(inputs)`. They will be removed in a future version.
## | Instructions for updating:
## | `inputs` is now automatically inferred
## |
## | Weight updates (for instance, the updates of the moving mean and variance
## | in a BatchNormalization layer) may be dependent on the inputs passed
## | when calling a layer. Hence, when reusing the same layer on
## | different inputs `a` and `b`, some entries in `layer.updates` may be
## | dependent on `a` and some on `b`. This method automatically keeps track
## | of dependencies.
## |
## | The `get_updates_for` method allows to retrieve the updates relevant to a
## | specific set of inputs.
## |
## | This call is ignored when eager execution is enabled (in that case, variable
## | updates are run on the fly and thus do not need to be tracked for later
## | execution).
## |
## | Arguments:
## | updates: Update op, or list/tuple of update ops, or zero-arg callable
## | that returns an update op. A zero-arg callable should be passed in
## | order to disable running the updates by setting `trainable=False`
## |
## | get_input_at(self, node_index)
## | Retrieves the input tensor(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A tensor (or list of tensors if the layer has multiple inputs).
## |
## | Raises:
## | RuntimeError: If called in Eager mode.
## |
## | get_input_mask_at(self, node_index)
## | Retrieves the input mask tensor(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A mask tensor
## | (or list of tensors if the layer has multiple inputs).
## |
## | get_input_shape_at(self, node_index)
## | Retrieves the input shape(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A shape tuple
## | (or list of shape tuples if the layer has multiple inputs).
## |
## | Raises:
## | RuntimeError: If called in Eager mode.
## |
## | get_losses_for(self, inputs)
## | Retrieves losses relevant to a specific set of inputs.
## |
## | Arguments:
## | inputs: Input tensor or list/tuple of input tensors.
## |
## | Returns:
## | List of loss tensors of the layer that depend on `inputs`.
## |
## | get_output_at(self, node_index)
## | Retrieves the output tensor(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A tensor (or list of tensors if the layer has multiple outputs).
## |
## | Raises:
## | RuntimeError: If called in Eager mode.
## |
## | get_output_mask_at(self, node_index)
## | Retrieves the output mask tensor(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A mask tensor
## | (or list of tensors if the layer has multiple outputs).
## |
## | get_output_shape_at(self, node_index)
## | Retrieves the output shape(s) of a layer at a given node.
## |
## | Arguments:
## | node_index: Integer, index of the node
## | from which to retrieve the attribute.
## | E.g. `node_index=0` will correspond to the
## | first time the layer was called.
## |
## | Returns:
## | A shape tuple
## | (or list of shape tuples if the layer has multiple outputs).
## |
## | Raises:
## | RuntimeError: If called in Eager mode.
## |
## | get_updates_for(self, inputs)
## | Retrieves updates relevant to a specific set of inputs.
## |
## | Arguments:
## | inputs: Input tensor or list/tuple of input tensors.
## |
## | Returns:
## | List of update ops of the layer that depend on `inputs`.
## |
## | get_weights(self)
## | >>> b = tf.keras.layers.Dense(1,
## | ... kernel_initializer=tf.constant_initializer(2.))
## | >>> b_out = b(tf.convert_to_tensor([[10., 20., 30.]]))
## | >>> b.get_weights()
## | [array([[2.],
## | [2.],
## | [2.]], dtype=float32), array([0.], dtype=float32)]
## | >>> b.set_weights(a.get_weights())
## | >>> b.get_weights()
## | [array([[1.],
## | [1.],
## | [1.]], dtype=float32), array([0.], dtype=float32)]
## |
## | Arguments:
## | weights: a list of Numpy arrays. The number
## | of arrays and their shape must match
## | number of the dimensions of the weights
## | of the layer (i.e. it should match the
## | output of `get_weights`).
## |
## | Raises:
## | ValueError: If the provided weights list does not match the
## | layer's specifications.
## |
## | ----------------------------------------------------------------------
## | Class methods inherited from tensorflow.python.keras.engine.base_layer.Layer:
## |
## | from_config(config) from builtins.type
## | Creates a layer from its config.
## |
## | This method is the reverse of `get_config`,
## | capable of instantiating the same layer from the config
## | dictionary. It does not handle layer connectivity
## | (handled by Network), nor weights (handled by `set_weights`).
## |
## | Arguments:
## | config: A Python dictionary, typically the
## | output of get_config.
## |
## | Returns:
## | A layer instance.
## |
## | ----------------------------------------------------------------------
## | Data descriptors inherited from tensorflow.python.keras.engine.base_layer.Layer:
## |
## | activity_regularizer
## | Optional regularizer function for the output of this layer.
## |
## | dtype
## | Dtype used by the weights of the layer, set in the constructor.
## |
## | dynamic
## | Whether the layer is dynamic (eager-only); set in the constructor.
## |
## | inbound_nodes
## | Deprecated, do NOT use! Only for compatibility with external Keras.
## |
## | input
## | Retrieves the input tensor(s) of a layer.
## |
## | Only applicable if the layer has exactly one input,
## | i.e. if it is connected to one incoming layer.
## |
## | Returns:
## | Input tensor or list of input tensors.
## |
## | Raises:
## | RuntimeError: If called in Eager mode.
## | AttributeError: If no inbound nodes are found.
## |
## | input_mask
## | Retrieves the input mask tensor(s) of a layer.
## |
## | Only applicable if the layer has exactly one inbound node,
## | i.e. if it is connected to one incoming layer.
## |
## | Returns:
## | Input mask tensor (potentially None) or list of input
## | mask tensors.
## |
## | Raises:
## | AttributeError: if the layer is connected to
## | more than one incoming layers.
## |
## | input_shape
## | Retrieves the input shape(s) of a layer.
## |
## | Only applicable if the layer has exactly one input,
## | i.e. if it is connected to one incoming layer, or if all inputs
## | have the same shape.
## |
## | Returns:
## | Input shape, as an integer shape tuple
## | (or list of shape tuples, one tuple per input tensor).
## |
## | Raises:
## | AttributeError: if the layer has no defined input_shape.
## | RuntimeError: if called in Eager mode.
## |
## | input_spec
## | `InputSpec` instance(s) describing the input format for this layer.
## |
## | When you create a layer subclass, you can set `self.input_spec` to enable
## | the layer to run input compatibility checks when it is called.
## | Consider a `Conv2D` layer: it can only be called on a single input tensor
## | of rank 4. As such, you can set, in `__init__()`:
## |
## | ```python
## | self.input_spec = tf.keras.layers.InputSpec(ndim=4)
## | ```
## |
## | Now, if you try to call the layer on an input that isn't rank 4
## | (for instance, an input of shape `(2,)`, it will raise a nicely-formatted
## | error:
## |
## | ```
## | ValueError: Input 0 of layer conv2d is incompatible with the layer:
## | expected ndim=4, found ndim=1. Full shape received: [2]
## | ```
## |
## | Input checks that can be specified via `input_spec` include:
## | - Structure (e.g. a single input, a list of 2 inputs, etc)
## | - Shape
## | - Rank (ndim)
## | - Dtype
## |
## | For more information, see `tf.keras.layers.InputSpec`.
## |
## | Returns:
## | A `tf.keras.layers.InputSpec` instance, or nested structure thereof.
## |
## | losses
## | Losses which are associated with this `Layer`.
## |
## | Variable regularization tensors are created when this property is accessed,
## | so it is eager safe: accessing `losses` under a `tf.GradientTape` will
## | propagate gradients back to the corresponding variables.
## |
## | Returns:
## | A list of tensors.
## |
## | metrics
## | List of `tf.keras.metrics.Metric` instances tracked by the layer.
## |
## | name
## | Name of the layer (string), set in the constructor.
## |
## | non_trainable_variables
## |
## | non_trainable_weights
## | List of all non-trainable weights tracked by this layer.
## |
## | Non-trainable weights are *not* updated during training. They are expected
## | to be updated manually in `call()`.
## |
## | Returns:
## | A list of non-trainable variables.
## |
## | outbound_nodes
## | Deprecated, do NOT use! Only for compatibility with external Keras.
## |
## | output
## | The original method wrapped such that it enters the module's name scope.
## |
## | ----------------------------------------------------------------------
## | Data descriptors inherited from tensorflow.python.module.module.Module:
## |
## | name_scope
## | Returns a `tf.name_scope` instance for this class.
## |
## | submodules
## | Sequence of all sub-modules.
## |
## | Submodules are modules which are properties of this module, or found as
## | properties of modules which are properties of this module (and so on).
## |
## | >>> a = tf.Module()
## | >>> b = tf.Module()
## | >>> c = tf.Module()
## | >>> a.b = b
## | >>> b.c = c
## | >>> list(a.submodules) == [b, c]
## | True
## | >>> list(b.submodules) == [c]
## | True
## | >>> list(c.submodules) == []
## | True
## |
## | Returns:
## | A sequence of all submodules.
## |
## | ----------------------------------------------------------------------
## | Data descriptors inherited from tensorflow.python.training.tracking.base.Trackable:
## |
## | __dict__
## | dictionary for instance variables (if defined)
## |
## | __weakref__
## | list of weak references to the object (if defined)
## |
## | ----------------------------------------------------------------------
## | Static methods inherited from tensorflow.python.keras.utils.version_utils.LayerVersionSelector:
## |
## | __new__(cls, *args, **kwargs)
## | Create and return a new object. See help(type) for accurate signature.