Ceph Administration and Troubleshooting

The Ceph Storage Administration course provides a hands-on approach to understanding one of the most popular storage solutions. Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block-, and file-level storage. This course teaches storage basics, then moves into the architecture of Ceph, its administration, and the analysis of Ceph components in a production environment. This is a lab-intensive, 5-day course.

    No classes are currently scheduled for this course.

    Call (919) 283-1653 to get a class scheduled online or in your area!

  1. Storage basics (NOT Ceph specific)
    • Block, Volume, Object storage
    • Distributed storage
    • Scalability
    • Durability
    • High Availability
    • Erasure Coding (theory and practice)
    • Linux and containers basics (review)
    • Containers overview
    • Network plugin: Calico
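
The erasure-coding theory listed above can be illustrated with a toy XOR-parity scheme. This is a sketch only, not Ceph's implementation: Ceph's erasure-coded pools use pluggable codes such as Reed-Solomon (the jerasure plugin), but the core idea of rebuilding a lost chunk from the surviving chunks is the same.

```python
# Toy erasure coding with XOR parity: k=2 data chunks, m=1 parity chunk.
# Illustrates the theory only; Ceph's erasure-coded pools use more general
# codes, but recovery works the same way: any lost chunk can be rebuilt
# from the survivors.

def encode(data, k=2):
    """Split data into k equal chunks and append one XOR parity chunk."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytes(a ^ b for a, b in zip(*chunks))
    return chunks + [parity]

def recover(chunks):
    """Rebuild one missing chunk (marked None) by XOR-ing the survivors."""
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None]
    chunks[missing] = bytes(a ^ b for a, b in zip(*survivors))
    return chunks

shards = encode(b"ceph stores this")  # two data shards + one parity shard
shards[1] = None                      # simulate losing the OSD holding shard 1
restored = recover(shards)
print(b"".join(restored[:2]))         # -> b'ceph stores this'
```

The trade-off the course explores: replication stores full copies (3x overhead for 2-failure tolerance), while erasure coding stores k data plus m parity chunks ((k+m)/k overhead) at the cost of more CPU and reconstruction traffic.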

  2. Ceph Architecture
    • Project goals (why Ceph was developed)
    • Use cases
    • High level architecture
    • BlueStore vs. FileStore
    • Ceph Journaling
    • Integrations with Ceph
    • Components and interfaces:
      • RADOS
      • librados
      • RADOS Gateway (RGW)
      • RBD
      • CephFS
    • Use cases and customers
    • Example applications
    • Prometheus and Ceph exporter

  3. Ceph component administration and analysis
    • How to use an RBD
    • Object Storage Daemons (OSDs)
    • Monitor servers (MONs)
    • Metadata Servers (MDSs) (high-level overview only; no deep dive)
    • Ceph Algorithms (e.g. CRUSH)
    • Pools and Placement Groups (PGs)
    • Authentication (Cephx Protocol)
    • Prometheus and Ceph exporter
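
The two-step placement covered by these bullets (object to placement group via hashing, PG to OSDs via CRUSH) can be simulated in miniature. The code below is a toy stand-in, not Ceph's algorithm: real Ceph uses rjenkins hashing and CRUSH, which also accounts for device weights and failure domains, and every name here is illustrative only.

```python
# Simplified sketch of Ceph's two-step data placement:
#   step 1: object name -> placement group (PG) via a stable hash
#   step 2: PG -> ordered set of OSDs (toy stand-in for CRUSH)
import hashlib

def pg_for_object(name, pg_num):
    """Step 1: hash the object name and take it modulo the pool's pg_num."""
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return h % pg_num

def osds_for_pg(pg, osds, replicas=3):
    """Step 2: deterministically rank OSDs for this PG and keep `replicas`.

    CRUSH does this with weighted, topology-aware pseudo-random selection;
    here we just rank by a hash so the mapping is stable and distinct.
    """
    ranked = sorted(osds, key=lambda osd: hashlib.md5(f"{pg}:{osd}".encode()).digest())
    return ranked[:replicas]

pg = pg_for_object("myimage.rbd", pg_num=128)
acting_set = osds_for_pg(pg, osds=list(range(10)))
print(pg, acting_set)  # same inputs always yield the same acting set
```

The key property both the toy and CRUSH share is that clients compute placement themselves from the cluster map, so no central lookup table is needed.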

  4. Installation of Ceph components, clusters, and services
    • Hardware requirements
    • Software dependencies
    • Installation preflight
    • Install storage cluster
    • Deploying a router (router gateways)
    • Install Ceph clients
    • Upgrades
    • Ceph journaling (EXT4)
    • BlueStore
    • How to configure multiple OSDs per container (we will configure two)
    • 3 nodes per student
    • Persistent volumes (PVs) and persistent volume claims

  5. Additional Kubernetes and Helm considerations
    • Install and start Helm
    • Add ceph-helm to the local Helm repos
    • Configure your Ceph cluster
    • Configure RBAC permissions
    • Label kubelets
    • Ceph deployment
    • Configure a pod to use a persistent volume from Ceph
    • Logging
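
A hypothetical manifest for the "configure a pod to use a persistent volume from Ceph" step might look like the following. The storage class and claim names below are assumptions that would come from your ceph-helm values, not fixed values from the course.

```yaml
# Hypothetical names throughout; the storage class created by ceph-helm
# and the claim name are placeholders for whatever your deployment uses.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd        # assumed RBD-backed class from ceph-helm
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-rbd-claim
```

Once the claim binds, Kubernetes dynamically provisions an RBD image in the configured Ceph pool and mounts it into the pod at /data.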

  6. Troubleshooting common problems
    • Measuring performance
    • Log analysis
    • Perform troubleshooting lab exercises that test students' knowledge of architecture, logging, and administration by solving problems (RBD-only troubleshooting examples for this class)

Students will learn how to install and manage a Ceph cluster and how to work with a CRUSH map, storage pools, mirroring, and snapshots. Students also learn to monitor a cluster using common Ceph tools. Additionally, students will deploy Ceph on Kubernetes while troubleshooting their environments.

  1. Introduction
  2. Welcome to Beachhead
  3. Navigating Your Nodes
  4. Installing Ceph
  5. Ceph Status
  6. Ceph Releases
  7. Ceph Benchmark Testing
  8. CephX and Keyrings
  9. Adding Users
  10. Storage Pools
  11. Delete a Storage Pool
  12. Replication
  13. Storage Erasure Coding
  14. Challenge 1
  15. Placement Groups
  16. Where Is My Data?

Basic Linux skills are helpful but not mandatory.

Familiarity with a text editor like vi, vim, or nano is helpful.

Any company or individual who wants to advance their knowledge of the cloud environment, keep up with the most recent changes, and prepare for the future of applications and services in the public or private cloud. Networking, general IT, DevOps, systems, and storage professionals would all be a great fit.

Ready to Advance Your Career?

CONTACT US NOW!