BSDCan 2015
The Technical BSD Conference
| Speakers |
| --- |
| Warner Losh |
| Schedule | |
| --- | --- |
| Day | Talks #1 - 12 June - 2015-06-12 |
| Room | DMS 1140 |
| Start time | 16:30 |
| Duration | 01:00 |
| Info | |
| --- | --- |
| ID | 580 |
| Event type | Lecture |
| Track | Hacking |
| Language used for presentation | English |
I/O Scheduling in CAM
SSDs have many unique characteristics not present in spinning drives. Applications have different access patterns and desire different performance trade-offs. GEOM offers some scheduling facilities, but they are hampered by having no visibility into the underlying device's characteristics. Scheduling I/O in CAM allows peripheral drivers to use their detailed knowledge of a drive to schedule I/Os that are optimal for the application's needs (with hints from the application).
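To make the idea concrete, here is a minimal sketch (in plain C, not the real CAM interface) of the kind of decision a peripheral driver could make when it knows the device's limits. The structure and function names (`iosched_softc`, `iosched_may_dispatch_write`) are hypothetical, invented for illustration only.

```c
/*
 * Sketch only: a peripheral-driver-style check that uses device-specific
 * knowledge (a cap on concurrent writes) to decide whether a queued write
 * may be dispatched ahead of pending reads. Not the actual CAM API.
 */
#include <stdbool.h>
#include <stdio.h>

struct iosched_softc {
	int reads_pending;	/* reads waiting in the queue */
	int writes_in_flight;	/* writes already sent to the drive */
	int write_limit;	/* device-specific cap on concurrent writes */
};

static bool
iosched_may_dispatch_write(const struct iosched_softc *sc)
{
	/* Never exceed the per-device concurrent-write cap. */
	if (sc->writes_in_flight >= sc->write_limit)
		return (false);
	/* Favor reads: hold further writes back while reads are queued. */
	if (sc->reads_pending > 0 && sc->writes_in_flight > 0)
		return (false);
	return (true);
}

int
main(void)
{
	struct iosched_softc sc = { .reads_pending = 4,
	    .writes_in_flight = 1, .write_limit = 2 };

	printf("dispatch write? %s\n",
	    iosched_may_dispatch_write(&sc) ? "yes" : "no");
	return (0);
}
```

Because the check lives in the driver, the cap can be tuned per drive model; upper layers such as GEOM would have no way to know that a particular SSD degrades beyond a given write concurrency.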
Netflix operates a small fleet of video servers for its video streaming service. There are two main kinds of server in our operation: a storage appliance, used for long-tail access and for filling other servers, and a flash appliance for serving popular titles. Our catalog changes somewhat each day as titles change in popularity and contracts expire or come online, so while our workload is mostly reads, we also need to write to and trim the drives from time to time. With flash drives we found that any sustained write activity above a certain level led to a sudden drop in read performance, reducing our effective capacity whenever this happened.

With clever scheduling, one can reduce these effects and keep read performance good, though write performance will suffer. The traditional scheduler offered no efficient way to do this short of write throttling in the application. While that does help mitigate the problem, when many threads or processes act in parallel it is hard for the application to coordinate everything, and the many layers between the application and the disk can interfere with even perfect coordination. Moving the throttling to the lowest layer in the system helps smooth out the bumps, and it can adapt dynamically to changing workloads (you can write more if you need to read less, for example).
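The "write more if you need to read less" adaptation can be sketched as a simple budget calculation. The following standalone C example is an illustration of that idea under assumed numbers; the constants, function name, and linear scaling rule are hypothetical and do not describe the real CAM I/O scheduler's policy.

```c
/*
 * Sketch only: derive a per-interval write budget from how much of the
 * drive's read capacity is currently in use, so writes expand when reads
 * are light and contract when reads are heavy.
 */
#include <stdio.h>

#define WRITE_BUDGET_MAX	256	/* writes per interval when reads are idle */
#define WRITE_BUDGET_MIN	8	/* floor so writes never fully starve */

static int
write_budget(int read_iops, int read_iops_capacity)
{
	int budget;

	if (read_iops >= read_iops_capacity)
		return (WRITE_BUDGET_MIN);

	/* Scale the budget linearly with unused read capacity. */
	budget = WRITE_BUDGET_MAX * (read_iops_capacity - read_iops) /
	    read_iops_capacity;
	return (budget < WRITE_BUDGET_MIN ? WRITE_BUDGET_MIN : budget);
}

int
main(void)
{
	printf("light read load: %d writes/interval\n",
	    write_budget(1000, 20000));
	printf("heavy read load: %d writes/interval\n",
	    write_budget(19000, 20000));
	return (0);
}
```

Recomputing such a budget at the bottom of the stack, per device and per interval, is what lets the throttle follow the workload without any coordination among the application's threads.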