Clone the Repository to Get the Datasets
!git clone https://github.com/joanby/deeplearning-az.git
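The listing in the next cell assumes Google Drive is already mounted in Colab; the standard mount call (not shown in this extract) would be:

from google.colab import drive
drive.mount('/content/drive')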
Test it
!ls '/content/drive/My Drive'
Install dependencies
!pip install scikit-learn
Install Theano
!pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
Collecting git+git://github.com/Theano/Theano.git
  Cloning git://github.com/Theano/Theano.git to /tmp/pip-req-build-7q89qzfc
  Running command git clone -q git://github.com/Theano/Theano.git /tmp/pip-req-build-7q89qzfc
Building wheels for collected packages: Theano
  Building wheel for Theano (setup.py) ... done
  Created wheel for Theano: filename=Theano-1.0.5+1.geb6a4125c-cp36-none-any.whl size=2668281 sha256=4d4c9648f72d9a1cfd6b33dea50b7685d703ab52da5c3aa3fe6ecd7dcad06048
  Stored in directory: /tmp/pip-ephem-wheel-cache-_6l9lz2i/wheels/ae/32/7c/62beb8371953eb20c271b3bac7d0e56e1a2020d46994346b52
Successfully built Theano
Installing collected packages: Theano
  Found existing installation: Theano 1.0.5
    Uninstalling Theano-1.0.5:
      Successfully uninstalled Theano-1.0.5
Successfully installed Theano-1.0.5+1.geb6a4125c
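The preview below is a pandas DataFrame; a minimal loading sketch, assuming the Churn_Modelling.csv file from the cloned repo and the course's usual column slicing (both are assumptions, not taken from this extract):

import pandas as pd

dataset = pd.read_csv("Churn_Modelling.csv")  # path inside the cloned repo is an assumption
X = dataset.iloc[:, 3:13].values              # features: CreditScore through EstimatedSalary
y = dataset.iloc[:, 13].values                # target: Exited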
dataset
EstimatedSalary Exited
0 101348.88 1
1 112542.58 0
2 113931.57 1
3 93826.63 0
4 79084.10 0
... ... ...
9995 96270.64 0
9996 101699.77 0
9997 42085.58 1
9998 92888.52 1
9999 38190.78 0
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer

# Encode the categorical columns: Geography (index 1) and Gender (index 2)
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])

# One-hot encode the Geography column, passing the remaining columns through untouched
onehotencoder = ColumnTransformer(
    [('one_hot_encoder', OneHotEncoder(categories='auto'), [1])],
    remainder='passthrough'
)
X = onehotencoder.fit_transform(X)
X = X[:, 1:]  # drop the first dummy column to avoid the dummy variable trap
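The scaling step below uses X_train and X_test, so the data must first be split; a minimal sketch, with test_size and random_state as assumptions:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)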
Feature scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)  # fit the scaler on the training set only
X_test = sc_X.transform(X_test)        # apply the training-set statistics to the test set
X_train.shape  # shape is an attribute, not a method
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Input In [18], in <cell line: 1>()
----> 1 import keras
      2 from keras.models import Sequential
      3 from keras.layers import Dense
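One way to resolve this in Colab is to import Keras through TensorFlow, which ships preinstalled; a sketch:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense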
Initialize the ANN
classifier = Sequential()
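Between initialization and compilation the network needs its layers; the ones below match the build_classifier definition shown later in this notebook:

# Input layer plus first hidden layer: 6 units, rectifier activation, 11 input features
classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu", input_dim = 11))
# Second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu"))
# Output layer: a single sigmoid unit for the binary churn label
classifier.add(Dense(units = 1, kernel_initializer = "uniform", activation = "sigmoid"))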
Compile the ANN
classifier.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
Epoch 1/100
800/800 [==============================] - 1s 2ms/step - loss: 0.4829 - accuracy: 0.7960
Epoch 2/100
800/800 [==============================] - 1s 2ms/step - loss: 0.4262 - accuracy: 0.7960
Epoch 3/100
800/800 [==============================] - 1s 2ms/step - loss: 0.4210 - accuracy: 0.8039
Epoch 4/100
800/800 [==============================] - 1s 2ms/step - loss: 0.4173 - accuracy: 0.8249
Epoch 5/100
800/800 [==============================] - 1s 2ms/step - loss: 0.4160 - accuracy: 0.8278
[epochs 6-98 omitted; loss drifts down to ~0.40 and accuracy settles around 0.835]
Epoch 99/100
800/800 [==============================] - 2s 2ms/step - loss: 0.4003 - accuracy: 0.8359
Epoch 100/100
800/800 [==============================] - 2s 2ms/step - loss: 0.3999 - accuracy: 0.8341
<tensorflow.python.keras.callbacks.History at 0x7f0eb0105630>
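With training finished, the natural next step is to predict on the test set, threshold the sigmoid output, and inspect a confusion matrix; a minimal sketch (the 0.5 threshold is the conventional choice, not taken from this extract):

from sklearn.metrics import confusion_matrix

y_pred = classifier.predict(X_test)  # churn probabilities
y_pred = (y_pred > 0.5)              # convert to class labels
cm = confusion_matrix(y_test, y_pred)
print(cm)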
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu", input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu"))
    classifier.add(Dense(units = 1, kernel_initializer = "uniform", activation = "sigmoid"))
    classifier.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
    return classifier
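The accuracies array used just below presumably comes from wrapping build_classifier in KerasClassifier and running k-fold cross-validation; a sketch, with batch_size, epochs, and cv = 10 as assumptions (the wrapper's import path is taken from the traceback at the end of this section):

from sklearn.model_selection import cross_val_score
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)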
mean = accuracies.mean()
variance = accuracies.std()  # note: .std() returns the standard deviation, not the variance
Improve the ANN
Dropout regularization to prevent overfitting
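The Dropout cell itself is missing from this extract; a minimal sketch of how Dropout layers would be added to this architecture (the 0.1 rate is an assumption):

from tensorflow.keras.layers import Dropout

classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu", input_dim = 11))
classifier.add(Dropout(rate = 0.1))  # randomly drop 10% of this layer's units each update
classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu"))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = "uniform", activation = "sigmoid"))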
Tune the ANN
from sklearn.model_selection import GridSearchCV

def build_classifier(optimizer):
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu", input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = "uniform", activation = "relu"))
    classifier.add(Dense(units = 1, kernel_initializer = "uniform", activation = "sigmoid"))
    classifier.compile(optimizer = optimizer, loss = "binary_crossentropy", metrics = ["accuracy"])
    return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {
    'batch_size' : [25, 32],
    'epochs' : [100, 500],   # 'nb_epoch' is the legacy Keras 1 spelling; the wrapper expects 'epochs'
    'optimizer' : ['adam', 'rmsprop']
}
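The grid_search object used below must be built from these parameters; the interrupted traceback at the end of this section shows the call, so it can be reconstructed (scoring and cv are taken from that traceback):

grid_search = GridSearchCV(estimator = classifier,
                           param_grid = parameters,
                           scoring = 'accuracy',
                           cv = 10)
grid_search = grid_search.fit(X_train, y_train)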
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-39-f2d4fd603a69> in <module>()
      3                            scoring = 'accuracy',
      4                            cv = 10)
----> 5 grid_search = grid_search.fit(X_train, y_train)
      6
      7 best_parameters = grid_search.best_params_

[stack frames through sklearn's GridSearchCV, joblib, and the TensorFlow training loop omitted]

KeyboardInterrupt: