So you've probably learned that if u is an eigenvector, then multiplying u by any nonzero scalar gives you another eigenvector with the same eigenvalue. That means the set of all a*u, where a is any scalar, forms a 1-dimensional space (a line if this is a real vector space). This is an eigenspace of dimension one. The full definition of an eigenspace is the set of all eigenvectors for a given eigenvalue, together with the zero vector. Now, if an eigenvalue has multiple linearly independent eigenvectors, then its eigenspace is still a linear space, but of dimension more than one. So for a real vector space, if an eigenvalue has two linearly independent eigenvectors, its eigenspace will be a 2-dimensional plane.
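Here's a quick numerical sketch of that last point (Python with numpy; the matrix is just an arbitrary example I picked so the eigenvalue 2 is repeated):

```python
import numpy as np

# A symmetric 3x3 matrix with eigenvalues 2, 2, 4 -- the eigenvalue 2
# is repeated, so its eigenspace is a 2-dimensional plane.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])

vals, vecs = np.linalg.eigh(A)  # eigh: for symmetric matrices

# Pick out the two independent eigenvectors belonging to eigenvalue 2.
plane = vecs[:, np.isclose(vals, 2.0)]
u, v = plane[:, 0], plane[:, 1]

# Any linear combination of them is again an eigenvector with eigenvalue 2:
w = 3 * u - 7 * v
print(np.allclose(A @ w, 2 * w))  # True: w lies in the 2-D eigenspace
```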
I "learned" about them for quantum computing (I think that's mostly linear algebra). I was kind of disappointed they're just vectors I somehow expected them to do something weird (based off the name).
Later, back at the lab, after trusting this guy, all the boys in their white coats, clipboards in hand, scratching their heads: "The Eigenvalue is off the charts!"
Eigenvectors, eigenvalues, eigenspaces, etc. are all pretty simple as basic definitions. In my opinion, they just turn out to be essential for the proofs of a lot of nice results: matrix diagonalization, Gram-Schmidt orthogonalization, polar decomposition, singular value decomposition, pseudoinverses, the spectral theorem, Jordan canonical form, rational canonical form, Sylvester's law of inertia, a bunch of nice facts about orthogonal and normal operators, some nifty eigenvalue-based formulas for the determinant and trace, etc.
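As a small illustration of a couple of items on that list (a sketch in Python with numpy; the matrix is an arbitrary example, not anything canonical): diagonalization A = P D P^{-1}, plus the facts that det(A) is the product of the eigenvalues and tr(A) is their sum.

```python
import numpy as np

# An arbitrary diagonalizable matrix, just for illustration.
# Its eigenvalues work out to 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

vals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(vals)

# Diagonalization: A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True

# Eigenvalue formulas for determinant and trace:
print(np.isclose(np.linalg.det(A), np.prod(vals)))  # det = product of eigenvalues
print(np.isclose(np.trace(A), np.sum(vals)))        # trace = sum of eigenvalues
```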