A single-node cluster requires some special configuration, but nothing too complicated. First, such a cluster can contain only one MON. Second, the CRUSH chooseleaf type must be set to 0 (osd) instead of the default 1 (host), so that replicas are placed across OSDs rather than across hosts. Third, the "size" and "min_size" pool parameters must be set appropriately for the number of OSDs in the cluster.
An example ceph.conf for a single-node cluster with two OSDs:
[global]
fsid = 44af0c2b-0423-4823-86f5-b21b5ed782b0
mon_initial_members = vanguard2
mon_host = 10.100.12.15
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_crush_chooseleaf_type = 0
osd_pool_default_size = 1
osd_pool_default_min_size = 1
For a cluster with a single OSD, change the last line to osd_pool_default_min_size = 0. Ceph is designed for failure resistance and will normally report HEALTH_WARN if the cluster is not configured to sustain the failure of at least one OSD without data loss.
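Note that the osd_pool_default_size and osd_pool_default_min_size settings only apply to pools created after the configuration change; pools that already exist keep their current values. The same parameters can be changed at runtime with the ceph CLI. A minimal sketch, assuming a pool named "rbd" (substitute your own pool name):

```shell
# Assumption: a pool named "rbd" already exists; replace with your pool name.
# Adjust replication for an existing pool to match a two-OSD single node:
ceph osd pool set rbd size 1
ceph osd pool set rbd min_size 1

# Verify that the cluster reaches a healthy state afterwards:
ceph health detail
```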