I have tried different optimizers and losses, varying the number of Dense layers, units, and activation functions. Here is my model
Code: Select all
import keras

# `normalize` is assumed to be a keras.layers.Normalization layer
# adapted on the training data earlier in the script
model = keras.Sequential([
    keras.Input((1000,)),
    normalize,
    keras.layers.Dense(1000, activation=keras.activations.relu),
    keras.layers.Dense(200, activation=keras.activations.relu),
    keras.layers.Dense(50, activation=keras.activations.relu),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy'])
losses = model.fit(train_data, train_labels, epochs=80, validation_split=0.2)
Code: Select all
Epoch 77/80
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 36ms/step - accuracy: 0.3374 - loss: nan - val_accuracy: 0.1786 - val_loss: nan
Epoch 78/80
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step - accuracy: 0.3603 - loss: nan - val_accuracy: 0.1786 - val_loss: nan
Epoch 79/80
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.3041 - loss: nan - val_accuracy: 0.1786 - val_loss: nan
Epoch 80/80
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.3645 - loss: nan - val_accuracy: 0.1786 - val_loss: nan
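A `loss: nan` from the very first epochs is frequently caused by the data itself rather than the architecture: NaN or infinite values in the inputs, or labels outside {0, 1} when using `BinaryCrossentropy`. A minimal sketch of such a sanity check (the data here is hypothetical, standing in for `train_data` / `train_labels` from the question):

```python
import numpy as np

# Hypothetical stand-ins for the question's train_data / train_labels
train_data = np.array([[0.5, np.nan, 1.2],
                       [0.1, 0.2, 0.3]])
train_labels = np.array([0, 1])

def find_bad_rows(x):
    """Return indices of rows containing NaN or infinite values."""
    return np.where(~np.isfinite(x).all(axis=1))[0]

bad = find_bad_rows(train_data)
print(bad)  # rows that would poison the loss

# BinaryCrossentropy(from_logits=True) expects labels in {0, 1}
labels_ok = set(np.unique(train_labels)) <= {0, 1}
print(labels_ok)
```

If any rows show up as bad, dropping or imputing them before `model.fit` is usually enough to make the loss finite again; the same check is worth running on the labels.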
More details here: https://stackoverflow.com/questions/790 ... lways-same