A “manifold” is a thingy that looks locally like a vector space. For the purposes of this conversation, the important thing is that in a vector space, parallel lines always stay a constant distance from each other.
Now, each point in a manifold has a specific *curvature*. It’s not important what that means precisely, but in a region with positive curvature, parallel lines get closer together as you go into the distance (an example of a manifold with positive curvature everywhere is a sphere; think of lines of longitude). In a region of negative curvature, parallel lines get further away from each other (an example of a manifold with negative curvature everywhere is a hyperboloid).
A manifold is said to be hyperbolic if it has negative curvature at all points. A hyperbolic manifold fitting certain criteria may be called a hyperbolic space for historical reasons.
Most machine learning problems involve considering your data points to be points in some high-dimensional vector space. Under certain circumstances, considering them to be points in a hyperbolic space instead may be advantageous. Hyperbolic spaces specifically are often used for data that come in the shape of trees: a tree gains exponentially more nodes with each level of depth, and hyperbolic space has exponentially more room the further out you go (basically, as you go into the distance, “space gets bigger”), so you can embed a tree while keeping a constant or near-constant distance between each point and its neighbors.
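To make “space gets bigger” concrete, here’s a minimal sketch (my own illustration, not from any particular library) using the Poincaré ball model of hyperbolic space, where points live inside the unit disk and distance is given by d(u, v) = arccosh(1 + 2‖u−v‖² / ((1−‖u‖²)(1−‖v‖²))). Two pairs of points with the same Euclidean separation end up much further apart hyperbolically when they sit near the edge of the disk:

```python
import math

def poincare_distance(u, v):
    # Hyperbolic distance in the Poincare ball model:
    # d(u, v) = arccosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
    sq_norm = lambda p: sum(x * x for x in p)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    return math.acosh(1 + 2 * diff / ((1 - sq_norm(u)) * (1 - sq_norm(v))))

# Two pairs of points, each separated by the same Euclidean distance (0.1):
near_center = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_edge = poincare_distance([0.85, 0.0], [0.95, 0.0])
print(near_center)  # roughly 0.20
print(near_edge)    # roughly 1.15 -- same Euclidean gap, far larger hyperbolic gap
```

This is why a tree’s exponentially growing layers fit: each successive “ring” of the disk holds exponentially more hyperbolic room than the last.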
Learning is also done on manifolds that are neither hyperbolic nor elliptic (positive curvature everywhere).