Ceph provides unified block, object, and file storage. The CRUSH algorithm places data deterministically, with no central lookup table: objects hash to placement groups (PGs), and each PG maps to a set of OSDs (object storage daemons).
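A minimal sketch of that two-step mapping. This is a toy stand-in, not CRUSH itself: Ceph hashes the object name to a PG within a pool, then CRUSH pseudo-randomly selects OSDs subject to the cluster map and failure-domain rules. Here rendezvous (HRW) hashing plays CRUSH's role, so any client can compute placement locally with no lookup service; all names and parameters below are illustrative.

```python
import hashlib

def place(obj_name: str, pg_num: int, osds: list, replicas: int = 3):
    """Map an object to a PG, then the PG to an ordered set of OSDs.

    Toy model of Ceph placement: stable hash for object -> PG, then
    rendezvous hashing for PG -> OSDs, so placement is deterministic
    and computable by every client without a central directory.
    """
    # Step 1: object -> placement group via a stable hash modulo pg_num.
    h = int.from_bytes(hashlib.sha256(obj_name.encode()).digest()[:8], "big")
    pg = h % pg_num
    # Step 2: PG -> OSDs; rank each OSD by a per-(pg, osd) hash, take top N.
    def weight(osd):
        key = f"{pg}:{osd}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return pg, sorted(osds, key=weight, reverse=True)[:replicas]

pg, targets = place("rbd_data.1234", pg_num=128, osds=list(range(6)))
```

Because the mapping is a pure function of the object name and the cluster membership, two clients with the same map always agree on where an object lives.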
There is no single point of failure: a small quorum of monitors maintains the authoritative cluster map via Paxos consensus. Ceph scales horizontally and self-heals, re-replicating data onto surviving OSDs when an OSD fails.