Are autoencoders and auto-associative neural networks the same thing?
Are there commonly used shapes for testing 4D+ classifiers? (such as half moons for 2D)
A good introduction/overview to neural network theory: Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (Free, online copy)
Should I scale values before using them to train an autoencoder?
Methods to describe the temporal consistency of kernel density data
Request for feedback on my U-Net training: is this underfitting or overfitting?
Training an autoencoder on a time series with a repeating pattern
Is it common for a KNN to be more accurate than a neural network?
Should I split my dataset if I'm solely trying to understand feature importance?
Is it acceptable to use pooling layers in variational autoencoders?
What are you learning right now? / What is the last thing you learned?
Can a model be overfit and underfit at the same time?
Virtual Event: ACM TechTalk, "LLMs: A New Way to Teach Programming" (Free, registration required)
Do you think it's easier to learn new concepts from video or written content?
Which statistical topics should be covered in an introduction to machine learning course?