
HPSS Treefrog Introduction (HUF 2017)



  1. HPSS Treefrog Introduction HUF 2017 http://www.hpss-collaboration.org

  2. Disclaimer: Forward-looking information, including schedules and future software, reflects current planning that may change and should not be taken as a commitment by IBM or the other members of the HPSS collaboration.

  3. HPSS Treefrog Goals
  § Manage and share data across the life of your mission's projects, procurements, infrastructure, deployment, user access, and staffing cycles.
  § Store, protect, and error-correct project data across a wide variety of local and remote classic and cloud storage products and services.
  § Effectively exploit and scale tape and other high-latency storage by using data containers to group and store files and data objects.

  4. A Single User Namespace
  Managed across industry storage devices and solutions, called storage endpoints:
  § Cloud
  § HSMs, including HPSS
  § Optical
  § Tape
  § File system
  § Disk
  § SSD
  Managed across data repositories:
  § Storage endpoints provide the real storage for data repositories.
  § Repositories are wholly contained inside a storage endpoint.

  5. Manage Data by Project
  § Projects provide the nexus between data management and data organization.
  § Administrators manage project policies, including:
    § Storage quotas
    § Storage access
    § Service limits
    § Access authorization
  § Users store data within projects and group data within data containers (called managed data sets).
  § Data are shared amongst project members (allowed users).
  § Project members may hold different roles: owner, reader, writer, modify, delete.
  § Data are owned by the project:
    § Ensures data always have an owner.
    § Allows easy onboarding and offboarding of users.
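The project-ownership and role model above can be sketched as follows. This is a minimal illustration, not the Treefrog schema: the `ProjectPolicy` class, its field names, and the role-to-action mapping are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a project policy record. Because data are owned
# by the project (not by individual users), removing a member from the
# `members` map never orphans any data.
@dataclass
class ProjectPolicy:
    storage_quota_bytes: int
    # role -> set of member names; roles are illustrative
    members: dict = field(default_factory=dict)

    def can(self, user: str, action: str) -> bool:
        """Check whether the user holds a role permitting the action."""
        allowed = {
            "owner":  {"read", "write", "modify", "delete"},
            "writer": {"read", "write"},
            "reader": {"read"},
        }
        return any(action in allowed.get(role, set())
                   for role, users in self.members.items() if user in users)

policy = ProjectPolicy(storage_quota_bytes=10**12,
                       members={"owner": {"alice"}, "reader": {"bob"}})
print(policy.can("bob", "read"))   # True
print(policy.can("bob", "write"))  # False
```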

  6. Policy-Defined Storage Management
  § Policies determine how and where data are stored.
  § Make multiple copies of data:
    § At ingest, from the golden copy
    § After a delay, from a managed copy
  § Control data recall:
    § Assign a primary recall copy
    § Assign failover copies
    § Block recall of copies from a storage endpoint, requiring administrator authorization
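The recall rules above can be illustrated with a short sketch. The function name, the copy-list structure, and the endpoint names are hypothetical; the point is only the ordering logic: primary first, then failovers, with blocked endpoints skipped unless an administrator authorizes them.

```python
# Illustrative recall-source selection: copies are ordered with the
# primary recall copy first, followed by failover copies.
def choose_recall_source(copies, blocked, admin_override=False):
    for copy in copies:
        if copy["endpoint"] in blocked and not admin_override:
            continue  # recall from this endpoint is blocked by policy
        return copy["endpoint"]
    return None  # no recallable copy without authorization

copies = [{"endpoint": "tape-1"}, {"endpoint": "cloud-1"}]
print(choose_recall_source(copies, blocked={"tape-1"}))  # cloud-1
print(choose_recall_source(copies, blocked={"tape-1"}, admin_override=True))  # tape-1
```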

  7. Smart Data Storage
  § Manage data containers, not individual data objects and files.
  § Grouped data will be stored as an immutable collection of files or objects called a managed data set.
  § As a bonus, grouping data benefits high-latency storage:
    § Decreases the number of tape syncs.
    § Allows all of the data to be recalled with a single IO.
  § Data will be grouped into data sets using a data retention format.
  § The Treefrog interface will make grouping data simple.
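The benefit of grouping can be seen with an ordinary tar archive, used here purely as an analogy: the slide does not specify Treefrog's container format. Many small files become one immutable object, so writing it costs one tape sync and recalling it costs one IO.

```python
import io
import tarfile

# Analogy only: group small files into one container object, in the
# spirit of a managed data set (not Treefrog's actual format).
files = {"a.dat": b"alpha", "b.dat": b"beta"}

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# One container = one write (one tape sync) and one recall IO for all members.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    restored = {m.name: tar.extractfile(m).read() for m in tar.getmembers()}
print(restored == files)  # True
```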

  8. Parallel Data Transfer
  § Managed data sets may be broken into smaller fragments, based on storage policy settings.
  § Fragments are contiguous sections of a Treefrog managed data set that are distributed across repositories.
  § The maximum degree of parallelism will be based on configuration.
  [Diagram: a data set is split into fragments #1, #2, and #3, described by a manifest object, and transferred in parallel to Repositories 1, 2, and 3.]
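Fragmentation into contiguous, repository-distributed pieces can be sketched as below. The fragment size, round-robin placement, repository names, and manifest fields are illustrative assumptions, not Treefrog's actual policy mechanics.

```python
# Sketch: split a data set into contiguous fragments and record a
# manifest; each manifest entry notes where the fragment lives and at
# what offset it belongs, so fragments can be fetched in parallel.
def fragment(data: bytes, size: int, repositories):
    manifest = []
    for i in range(0, len(data), size):
        repo = repositories[(i // size) % len(repositories)]  # round-robin (assumed)
        manifest.append({"repo": repo, "offset": i, "data": data[i:i + size]})
    return manifest

pieces = fragment(b"0123456789", 4, ["repo1", "repo2", "repo3"])
# Reassembly orders fragments by offset, regardless of arrival order.
reassembled = b"".join(p["data"] for p in sorted(pieces, key=lambda p: p["offset"]))
print(reassembled)  # b'0123456789'
```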

  9. Data Redundancy via Erasure Coding
  § Parity fragments will be generated based on storage policy settings.
  § The number of fragments that may be recovered will be based on the number of parity fragments created.
  [Diagram: data fragments #1, #2, and #3 are stored in Repositories 1, 2, and 3; a parity fragment is stored in Repository 4.]
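The recovery property can be demonstrated in miniature with single XOR parity, the simplest erasure code: one parity fragment allows any one lost fragment to be rebuilt. Production erasure codes (e.g. Reed-Solomon) generalize this so that k parity fragments tolerate k losses; the slide does not say which code Treefrog uses.

```python
# XOR of equal-length fragments: serves both to create the parity
# fragment and to rebuild a single missing data fragment.
def xor_parity(fragments):
    parity = bytes(len(fragments[0]))
    for frag in fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return parity

frags = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(frags)

# Lose fragment #2; XOR the survivors with the parity to recover it.
recovered = xor_parity([frags[0], frags[2], parity])
print(recovered == frags[1])  # True
```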

  10. More About Storage Policies
  § A copy of a data set may be:
    § Stored to a single repository
    § Fragmented to a single repository
    § Fragmented across multiple repositories
  § Changing storage policies only moves data when required.
  [Diagram: a first copy is fragmented across several repositories with a parity fragment; a second copy is stored whole to a single repository.]

  11. Simple Insertion of New Storage Endpoints
  § The copy agent is based on the Apache jclouds BlobStore.
  § The copy agent interface will be extensible.
  § AWS, Google Cloud Storage, Azure, and Rackspace are already supported.
  § An HPSS interface is planned.
  § Adding a storage endpoint will be as simple as adding a new jclouds interface.
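The "one new interface per endpoint" idea can be sketched as below. jclouds itself is a Java library, so this is a Python analogue, not the jclouds API; the interface methods and the in-memory endpoint are invented for illustration.

```python
from abc import ABC, abstractmethod

# Python analogue of an extensible blobstore-style interface: supporting
# a new storage endpoint means supplying one implementation of this
# small contract, leaving the rest of the system unchanged.
class StorageEndpoint(ABC):
    @abstractmethod
    def put(self, container: str, name: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, container: str, name: str) -> bytes: ...

class InMemoryEndpoint(StorageEndpoint):
    """Toy endpoint standing in for a cloud, tape, or HPSS backend."""
    def __init__(self):
        self._store = {}

    def put(self, container, name, data):
        self._store[(container, name)] = data

    def get(self, container, name):
        return self._store[(container, name)]

ep = InMemoryEndpoint()
ep.put("project-a", "frag-1", b"payload")
print(ep.get("project-a", "frag-1"))  # b'payload'
```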

  12. Data and Metadata Verification
  § Each fragment will be stored with a checksum.
  § Treefrog can verify both the metadata and the data of managed data sets.
  § Administrators use storage policies to control the verification settings and the resulting overhead.
  § Metadata verification will verify that the location, checksum, and size of each fragment in the repository match the values Treefrog has stored.
    § Metadata verification will not access the data.
  § Data verification will verify the checksum of each fragment.
    § Data verification may access the data.
    § Treefrog will use the built-in verification on storage systems that have it.
    § Otherwise, Treefrog will stage fragments to verify their checksums.
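The two verification levels can be contrasted in a short sketch. SHA-256 and the record fields are assumptions for the example; the slide does not name the checksum algorithm Treefrog uses.

```python
import hashlib

# Metadata verification compares stored records (no data access);
# data verification recomputes the checksum from the bytes themselves,
# which may require staging the fragment.
def make_record(data: bytes):
    return {"size": len(data), "sha256": hashlib.sha256(data).hexdigest()}

def verify_metadata(expected, repository_record):
    return expected == repository_record      # cheap: no data access

def verify_data(expected, data: bytes):
    return make_record(data) == expected      # expensive: reads the fragment

data = b"fragment bytes"
record = make_record(data)
print(verify_metadata(record, make_record(data)))  # True
print(verify_data(record, data))                   # True
print(verify_data(record, b"corrupted bytes"))     # False
```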

  13. All of That in an Extreme-Scale Architecture
  § The scale-out design allows incremental horizontal growth by adding new servers and devices.
  § Load balancing uses HAProxy.
  § Agents may run at the client to take advantage of available processing power and reduce store-and-forwards.

  14. But Wait, There's More!
  In addition, HPSS Treefrog will:
  § Decrease software development delivery time.
  § Decrease software deployment time.
  § Enable user installation.
  § Increase timely access to trending technology.
  § Increase use of trending programming language skills and open software.
  § Avoid impact to ongoing HPSS core services development.

  15. Treefrog Will Be an HPSS Interface
  [Architecture diagram: client interfaces (Spectrum Scale, a FUSE filesystem, parallel FTP, SwiftOnHPSS, the HPSS Client API for 3rd-party applications, and the HPSS Treefrog interface and services) sit above a massively scalable global HPSS namespace enabled by DB2, running on RHEL core server and mover computers (Intel and Power). Beneath is an extreme-scale, high-performance automated HSM with hardware- and vendor-neutral disk tiers and storage services: disk, tape, and cloud/object or file/block storage, including IBM, Oracle, and Spectra Logic enterprise and LTO tape with LTFS.]

  16. Treefrog Will Use Existing Technologies
  § Existing products: only configuration changes are required.
  § Extendable functionality: open-source code or libraries.
  § Treefrog-specific code: code specific to the Treefrog application, requiring from-scratch development.

  17. Treefrog Will Use Existing Technologies (continued; diagram slide)

  18. Questions? http://www.hpss-collaboration.org
