Tim O’Mahony Technical Support #
• Previously…in Global Distributed Perforce
• Don’t do that… do this!
• Living on the Edge
• Not just for multi-site, but everywhere #
#
• Provide warm standby servers
• Reduce load and downtime on a primary server
• Provide support for build farms
• Alternative to Proxy in some places #
• Pros
  – Processes commands locally
  – Holds both metadata and archive files
  – Great for a remote site that is mostly browsing and submitting little
  – Great for offloading work from sites on a fast local LAN #
• Cons
  – Forwards all write commands to the Master Server
  – Trade-off vs. Proxy: requires a higher level of machine provisioning and more administrative consideration #
“ Duplication of services and metadata, only to have to go back to the master whenever I need something done locally #
#
• What if that replica could handle 98% of things?
• Regular commands happen on the replica
• Reduce remote users’ time waiting for version-management tasks #
“ …cause Every Band, I mean Perforce Versioning Service, needs one (or two, maybe three) #
• Commit Server
  – Stores the canonical archives and permanent metadata. Similar to a Perforce master server, but may not contain all workspace information.
• Edge Server
  – An edge server contains a replicated copy of the commit server data and a unique, local copy of some workspace and work-in-progress information. It can process read-only operations and operations that only write to the local data. #
• Each edge server must be backed up separately from the commit server.
• Exclusive locks are global.
• Shelves created on an edge server are not normally shared between edge servers.
• You can promote a shelf to the commit server in 2014.1 (see the example below).
• Auto-creation of users is not possible on an edge server. #
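A minimal sketch of promoting a shelf from an edge so it becomes visible on the commit server and other edges; the changelist number is hypothetical:

  p4 shelve -p -c 1234      # -p promotes the shelved files to the commit server (2014.1+)
  p4 describe -S 1234       # the shelved files can now be inspected from other servers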
• Labels – global, or local to the edge (see the sketch below)
• Triggers
• Logs and Audits – the Edge has its own
• Unload depot may be different on the edge
• Time Zone needs to be the same
• Upgrade Commit and Edge at the same time #
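As a sketch of the label behaviour above: labels created on an edge are local to that edge by default, and the -g flag described in the distributed-versioning documentation makes them global. The label name here is hypothetical:

  p4 label -g myGlobalLabel          # create/update the label spec on the commit server
  p4 labelsync -g -l myGlobalLabel   # tag files globally rather than locally on the edge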
“ Benchmark of Perforce operations with 128 ms network latency between client and server. The file-related commands operated against 7,000 files #
#
• Options
  – From scratch
  – Utilize existing forwarding replicas or build farms
• Turn the Master into the Commit Server (see the sketch below)
  – Choose a ServerID and use p4 serverid to save it
  – Server spec Services: commit-server #
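A hedged sketch of the commit-server conversion, assuming a hypothetical ServerID of myCommit; the spec fields follow the usual p4 server form:

  p4 serverid myCommit      # writes the ID into server.id under P4ROOT
  p4 server myCommit        # edit the server spec, e.g.:

    ServerID:  myCommit
    Type:      server
    Services:  commit-server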
• Set up a replica
• Services: edge-server
• Take a filtered checkpoint
  – p4d -r $P4ROOT -K db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex -jd -z filtered.gz
• Restore & start up the Edge (see the sketch below) #
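A hedged sketch of seeding and starting the edge, assuming a hypothetical edge ServerID of myEdge, hypothetical hosts, paths and port, and the standard replica configurables; service-user setup and login tickets are omitted for brevity:

  # on the commit server: describe the edge before seeding it
  p4 server myEdge                                          # Services: edge-server
  p4 configure set myEdge#P4TARGET=commit.example.com:1666
  p4 configure set myEdge#startup.1="pull -i 1"             # metadata journal pull
  p4 configure set myEdge#startup.2="pull -u -i 1"          # archive content pull

  # on the edge host: restore the filtered checkpoint, stamp the ID, start up
  p4d -r /p4/edge/root -jr -z filtered.gz
  p4d -r /p4/edge/root -xD myEdge
  p4d -r /p4/edge/root -p 1667 -d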
• Migrate Workspaces to the Edge
  – Have users Submit/Revert open files first
• Unload the workspace
  – p4 unload -c workspace
• Reload the workspace on the edge (see the example below)
  – p4 reload -c workspace -p protocol:host:port
• protocol:host:port refers to the commit or remote edge server the workspace is being migrated from. #
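For example, migrating a single hypothetical workspace bruno_ws from the commit server to the edge:

  # run against the commit server that currently owns the workspace
  p4 -p commit.example.com:1666 unload -c bruno_ws

  # run against the edge, pulling the unloaded workspace across from the commit server
  p4 -p edge.example.com:1667 reload -c bruno_ws -p commit.example.com:1666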
• Run “p4 -Ztag info”:

  $ p4 -Ztag info
  ... serverVersion P4D/DARWIN90X86_64/2014.1/821990 (2014/04/08)
  ... ServerID myEdge
  ... serverServices edge-server
  ... changeServer change.perforce.com.au:1666
  ... serverLicense Perforce Software Pty Ltd 500 users (expires 2015/01/06)
  ... serverLicense-ip 127.0.0.1
  ... caseHandling insensitive
  ... replica commit.perforce.com.au:1666
  ... minClient 97.1 #
• Triggers (see the trigger-table sketch below)
  – edge-submit
    • Like a pre-submit trigger
  – edge-content
    • Mid-submit trigger on the edge server
    • After file transfer from the client to the edge server
    • Prior to file transfer to the commit server
    • At this point, the changelist is shelved #
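A hedged sketch of how these trigger types might be registered in the trigger table (p4 triggers); the trigger names, depot paths and scripts are hypothetical:

  Triggers:
      edgeCheck  edge-submit  //depot/... "/p4/scripts/edge_check.sh %changelist%"
      edgeScan   edge-content //depot/... "/p4/scripts/edge_scan.sh %changelist%"

edgeCheck runs on the edge before the client’s files are transferred; edgeScan runs after the files have reached the edge (and the change is shelved) but before they are transferred to the commit server.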
• Peeking (improved concurrency through lockless reads)
  – p4 configure set db.peeking=2
• Consider filtering if edges are in remote areas
• Backup strategies
• Build servers chained off the Edge Server (see the sketch below) #
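A hedged sketch of the build-server chaining mentioned above, with a hypothetical ServerID and hostnames; the build server’s replication target is simply the edge rather than the commit server:

  p4 server myBuild                                      # Services: build-server
  p4 configure set myBuild#P4TARGET=edge.example.com:1667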
#
“ Local and distributed edge setup #
• Lots of Edges
  – Have the Commit just commit
    • lbr.replication=shared
  – Leverage the same storage solution (see the sketch below)
    • Commit and Edge point to the same storage
  – Automatic promotion of shelves
• Clustered Perforce #
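A hedged sketch of the shared-archive idea above, assuming the edge’s ServerID is myEdge and both servers mount the same archive storage; check current documentation before relying on this:

  p4 configure set myEdge#lbr.replication=shared   # edge reads archives from the shared store instead of pulling copies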
Tim O’Mahony is a Technical Support Manager from the Australian office at Perforce. He has a wide and diverse knowledge of Perforce products, specializing in its server technology since 2004. Before joining Perforce, he focused on network simulation and Java programming. #
Tim O’Mahony tomahony@perforce.com #
#
• Why would I consider this model?
  – When servers are in a data center
  – If the primary Perforce server is very high spec
  – With a large number of client workspaces
  – For a very high number of transactions #
• Delegates load from the primary server
• Transparent to end users
• Provides failover
• Capacity is easy to increase
• Improved service levels to end users
  – Increased capacity
  – Backup
  – Failover #
#
#
• Packaged command-line utility, p4cmgr
  – Configuration
  – Control
  – Administration
• Technologies
  – Linux
  – SaltStack
  – Python
  – Apache ZooKeeper #
• p4cmgr <command> <options>

  $ p4cmgr --help
  optional arguments:
    --help     show this help message and exit
  subcommands: {init,add,start,stop,restart,status,backup}
    init       Initialise a new cluster and create a depot master
    add        Add a service into a cluster
    start      Start a service or services on a host or cluster
    stop       Stop a service or services on a host or cluster
    restart    Restart a service or services on a host or cluster
    status     Get a simple debug style output for all nodes
    backup     Perform a backup of the cluster #
• p4cmgr init <cluster> <node> [-s <service>]
  – Configures a new cluster
  – Installs salt-minion
  – Defines the first ZooKeeper node
  – Deploys the depot-master onto the given node
  – Establishes a baseline for subsequent Perforce servers (see the usage example below) #
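A hedged usage example with a hypothetical cluster name and host, following the argument form above:

  p4cmgr init mycluster node1.example.com

This installs the salt-minion on node1, makes it the first ZooKeeper node and deploys the depot master there for the cluster “mycluster”.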
• p4cmgr add <type> <node> (see the usage sketch below)
• Supported types
  – ZooKeeper
  – Depot standby
  – Workspace server
  – Workspace router
• Actions
  – Installs salt-minion
  – Deploys the relevant components onto the node #
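A hedged sketch of growing the same hypothetical cluster; the exact <type> keywords are assumptions based on the supported types listed above, so substitute whatever identifiers your p4cmgr build accepts:

  p4cmgr add workspace-server node2.example.com
  p4cmgr add workspace-router node3.example.com
  p4cmgr add depot-standby    node4.example.com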
• p4cmgr start/stop
  – Brings the cluster up/down in the correct sequence
  – Router started last and stopped first
• p4cmgr restart
  – stop, then start
• p4cmgr status
  – Prints composition, configuration and status
  – Verbose output #
• p4cmgr backup
  – Still open for business
  – Processing load delegated to the standby
    • Admin checkpoint on the standby
    • Journal rotate on the master
  – Still need off-site O/S backups
    • Checkpoint
    • Journal
    • Archives #
#
Darrell Robins is a Software Developer based in the Perforce UK office. He has been with Perforce since 2011, working mainly on web-based projects such as OnDemand, Commons and Insights. Life before Perforce was a mixture of web, Java and C programming. #
Darrell Robins drobins@perforce.com #