Observe-Analyze-Act Paradigm for Storage System Resource Arbitration Li Yin (University of California, Berkeley) Email: yinli@eecs.berkeley.edu Joint work with: Sandeep Uttamchandani, Guillermo Alvarez, John Palmer (IBM Almaden Research Center) and Randy Katz (University of California, Berkeley)
Outline Observe-analyze-act in storage system: CHAMELEON Motivation System model and architecture Design details Experimental results Observe-analyze-act in other scenarios Example: network applications Future challenges
Need for Run-time System Management E-mail, web-server, and data-warehousing applications with SLA goals, mapped to storage resources through storage virtualization. Static resource allocation is not enough: information about access characteristics is incomplete (workload variations, changing goals), and exception scenarios occur (hardware failures, load surges).
Approaches for Run-time Storage System Management Today: administrators perform the observe-analyze-act loop manually. Automating the observe-analyze-act loop: rule-based systems (complexity, brittleness); pure feedback-based systems (infeasible for real-world multi-parameter tuning); model-based approaches. Challenges: How to represent system details as models? How to create and evolve models? How to use models for decision making?
System Model for Resource Arbitration [Diagram: hosts 1..n connect through interconnection-fabric switches 1..m, each with a controller; the QoS decision module sets throttling points.] Input: SLAs for workloads; current system status (performance). Output: resource reallocation actions (throttling decisions).
Our Solution: CHAMELEON [Architecture diagram: the Observe module collects current state from the managed system; the Knowledge Base holds component, workload, and action models plus designer-defined policies; the Analyze module is a reasoning engine based on piece-wise linear programming; the Act module is a throttling executor that applies incremental throttling with a feedback-controlled step size and can re-trigger the reasoning engine.]
Knowledge Base: Component Model Objective: predict the service time for a given load at a component (for example, a storage controller). Service_time_controller = L(request size, read/write ratio, random/sequential ratio, request rate) [Figure: example component model for a FAStT9 controller, 3 disks, RAID, 1KB requests, random access: average response time (ms) vs. requests/second.]
Component Model (cont.) Over the full operating range: quadratic fit (S = 3., r = .3) vs. linear fit (S = 3., r = .739). Non-saturated case: linear fit (S = .59, r = .99).
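The non-saturated linear fit above can be sketched as a least-squares regression. This is a minimal illustration, not the deck's actual fitting code: the measurement values and the name `predict_service_time` are hypothetical, and a real component model would also condition on request size, read/write ratio, and random/sequential ratio.

```python
import numpy as np

# Hypothetical measurements in the non-saturated region:
# request rate (requests/s) vs. average service time (ms) at a controller.
request_rate = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
service_time = np.array([2.1, 3.0, 3.8, 4.9, 5.7])

# Fit the linear component model: service_time = a * request_rate + b.
a, b = np.polyfit(request_rate, service_time, 1)

def predict_service_time(rate):
    """Predict controller service time (ms) for a given request rate."""
    return a * rate + b
```

The fit only holds below saturation; past the knee of the curve a higher-order model (or a separate saturated regime) is needed, which is why the slide contrasts quadratic and linear fits.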
Knowledge Base: Workload Model Objective: predict the load on component i as a function of workload j's request rate. Component_load_i,j = W_i,j(request rate of workload j) [Figure: example for a workload with fixed request size, read/write ratio, and sequential-access ratio.]
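A workload model of this shape can be sketched as a per-(workload, component) linear map. The workload names, coefficients, and the function `component_load` below are illustrative assumptions, not values from the deck; in CHAMELEON the coefficients would be fitted from monitored data.

```python
# Hypothetical fitted coefficients: load imposed on component i per unit
# of workload j's request rate, one entry per (workload, component) pair.
workload_coeff = {
    ("web", "controller"): 0.8,
    ("mail", "controller"): 0.5,
}

def component_load(workload, component, request_rate):
    """Predicted load (requests/s) on `component` from `workload`
    when the workload runs at `request_rate` requests/s."""
    return workload_coeff[(workload, component)] * request_rate
```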
Knowledge Base: Action Model Objective: predict the effect of corrective actions on workload requirements. Example: request rate of workload j = A_j(token issue rate for workload j)
Analyze Module: Reasoning Engine Formulated as a constraint-solving problem. Part 1: Predict action behavior: for each candidate throttling decision, predict its performance result based on the knowledge base. Part 2: Constraint solving: use linear-programming techniques to scan all feasible solutions and choose the optimal one.
Reasoning Engine: Predict Result Chain all the models together to predict an action's result. Input: token issue rate for each workload. Output: expected latency. [Diagram: token issue rates → action model → per-workload request rates → workload models → per-workload component loads → component model → latency.]
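The chaining above can be sketched end to end. All three models and their coefficients here are illustrative stand-ins (identity action model, linear workload and component models); only the chaining structure reflects the slide.

```python
def action_model(token_rate):
    # Assume one token admits one request, so the achievable request
    # rate equals the token issue rate (illustrative action model).
    return token_rate

def workload_model(request_rate, coeff):
    # Linear workload model: load on the component per unit request rate.
    return coeff * request_rate

def component_model(total_load, slope=0.02, base=1.0):
    # Linear component model, valid only in the non-saturated region.
    return slope * total_load + base

def predict_latency(token_rates, coeffs):
    """Expected component latency (ms) for per-workload token issue rates:
    chain action model -> workload model -> component model."""
    total_load = sum(workload_model(action_model(t), c)
                     for t, c in zip(token_rates, coeffs))
    return component_model(total_load)
```

For example, throttling workload 1 to 100 tokens/s and workload 2 to 50 tokens/s yields a single predicted latency for the shared component, which the constraint solver then checks against each workload's SLA.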
Reasoning Engine: Constraint Solving Formulated using linear programming. Variables: token issue rate for each workload. Constraints: workloads should meet their SLA latency goals. Objective function: minimize the number of workloads violating their SLA goals, keeping workloads as close to their SLA I/O rate as possible: Minimize Σ_i p_ai · p_bi · [SLA_i − T(current_throughput_i, t_i)] / SLA_i, where p_ai = workload priority and p_bi = quadrant priority. [Quadrant diagram: normalized latency vs. normalized IOps, divided into FAILED, MEET, EXCEED, and LUCKY regions.]
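The "scan all feasible solutions" step can be sketched as a brute-force search over candidate token rates; this replaces the deck's piece-wise linear-programming formulation with a simple enumeration for illustration, and the function names and the priority-weighted score are assumptions.

```python
from itertools import product

def solve(candidates, predict_latencies, sla_latency, priority):
    """Pick per-workload token issue rates that satisfy every workload's
    SLA latency goal, maximizing priority-weighted throughput among the
    feasible combinations; returns None when no combination is feasible."""
    best, best_score = None, float("-inf")
    for rates in product(*candidates):          # every combination of rates
        latencies = predict_latencies(rates)    # one predicted latency each
        if all(l <= s for l, s in zip(latencies, sla_latency)):
            score = sum(p * r for p, r in zip(priority, rates))
            if score > best_score:
                best, best_score = rates, score
    return best
```

A real piece-wise linear program scales far better than enumeration, but the feasibility check (latency constraints) and the priority-weighted objective play the same roles in both.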
Act Module: Throttling Executor Hybrid of feedback and prediction: ability to switch to rule-based (designer-defined) policies when the confidence value is low; ability to re-trigger the reasoning engine. [Flowchart: when the reasoning engine is invoked, if the confidence value < threshold, execute designer policies; otherwise execute the reasoning engine's output with step-wise throttling proportional to the confidence value; then analyze system state and either continue throttling or re-trigger the reasoning engine.]
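The decision in the flowchart can be sketched as follows; the threshold value, step rule, and names (`throttle_step`, `act`) are illustrative assumptions, not the deck's implementation.

```python
CONF_THRESHOLD = 0.5  # assumed cutoff below which models aren't trusted

def throttle_step(current, target, confidence):
    """Move the token issue rate toward the reasoning engine's target by a
    step proportional to confidence, so low-confidence predictions are
    applied cautiously and feedback can correct model errors."""
    return current + confidence * (target - current)

def act(current, target, confidence, apply_policy):
    """Hybrid act step: fall back to a designer-defined policy when
    confidence is low, otherwise take a confidence-scaled throttling step."""
    if confidence < CONF_THRESHOLD:
        return apply_policy(current)   # rule-based fallback
    return throttle_step(current, target, confidence)
```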
Experimental Results Test-bed configuration: IBM x-series server (Red Hat server kernel); FAStT 9 controller drives (RAID); Gbps Fibre Channel link. Tests consist of: synthetic workloads; real-world trace replay (HP traces and SPC traces).
Experimental Results: Synthetic Workloads [Plots: normalized latency vs. normalized IOps per workload.] Effect of priority values on the output of the constraint solver: (a) equal priority, (b) workload priorities, (c) quadrant priorities. Effect of model errors on the output of the constraint solver: (a) without feedback, (b) with feedback.
Experimental Results: Real-world Trace Replay Real-world block-level traces from HP (Cello9 trace) and SPC (web server); a phased synthetic workload acts as the third flow. Test goals: Do the workloads converge to their SLAs? How reactive is the system? How does CHAMELEON handle unpredictable variations?
Real-world Trace Replay [Plots: IOPS and latency vs. time for Cello9, SPC, and SYN, each against its SLA: without CHAMELEON (left) and with throttling (right).]
Real-world Trace Replay [Plots: IOPS and latency vs. time for Cello9, SPC, and SYN, each against its SLA: with periodic un-throttling (left) and when handling system changes (right).]
Other system management scenarios? Automate the observe-analyze-act loop for other self-management scenarios. Example: CHAMELEON for network applications, e.g. a proxy in front of a server farm: 1: monitor network/server behaviors; 2: establish and maintain models of system behavior; 3: analyze and make action decisions; 4: execute, or distribute action decisions to execution points.
Future Work Better methods to improve model accuracy More general constraint solver Combining with other actions CHAMELEON in other scenarios CHAMELEON for reliability and failure
References L. Yin, S. Uttamchandani, J. Palmer, R. Katz, G. Agha, "AUTOLOOP: Automated Action Selection in the Observe-Analyze-Act Loop for Storage Systems", submitted for publication, 5. S. Uttamchandani, L. Yin, G. Alvarez, J. Palmer, G. Agha, "CHAMELEON: a self-evolving, fully-adaptive resource arbitrator for storage systems", to appear in the USENIX Annual Technical Conference (USENIX 5), 5. S. Uttamchandani, K. Voruganti, S. Srinivasan, J. Palmer, D. Pease, "Polus: Growing Storage QoS Management beyond a -year old kid", 3rd USENIX Conference on File and Storage Technologies (FAST).
Questions? Email: yinli@eecs.berkeley.edu