1. Service-centric networking with SCAFFOLD
Michael J. Freedman, Princeton University
with Matvey Arye, Prem Gopalan, Steven Ko, Erik Nordstrom, Jen Rexford, and David Shue


2. From a host-centric architecture
[timeline figure: 1960s]


3. From a host-centric architecture
[timeline figure: 1960s → 1970s]


4. From a host-centric architecture
[timeline figure: 1960s → 1970s → 1990s]


5. To a service-centric architecture
[timeline figure: 1960s → 1970s → 1990s → 2000s]


6. To a service-centric architecture
• Users want services, agnostic of actual host/location
• Service operators need: replica selection / load balancing, replica registration, liveness monitoring, failover, migration, …


7. Hacks to fake service-centrism today
• Layer 4/7: DNS with small TTLs; HTTP redirects; Layer-7 switching
• Layer 3: IP addresses and IP anycast; inter-/intra-domain routing updates
• Layer 2: VIP/DIP load balancers; VRRP; ARP spoofing
+ Home-brewed registration, configuration, monitoring, …


8. To a service-centric architecture
• Users want services, agnostic of actual host/location
• Service operators need: replica selection / load balancing, replica registration, liveness monitoring, failover, migration, …
• Service-level anycast as basic network primitive


9. Two high-level questions
• Moderate vision: Can network support aid self-configuration for replicated services?
• Big vision: Should "service-centric networking" become the new thin waist of the Internet?


10. Naming as a "thin waist"
• Host-centric design: traditionally one IP per NIC
  – Load balancing, failover, and mobility complicate this
  – Now: virtual IPs, virtual MACs, …
• Content-centric architecture: unique ID per data object
  – DONA (Berkeley), CCN (PARC), …
• SCAFFOLD: unique ID per group of processes
  – Each member must individually provide full group functionality
  – Group can vary in size, distributed over LAN or WAN


11. Object granularity can vary by service
SCAFFOLD name (fixed bit-length) = K-bit Admin Prefix + machine-readable ObjectID
  Admin Prefix    ObjectID example
  Google          YouTube service
  Google          "Somewhere over the rainbow" video
  Facebook        Memcache partition 243
  Comcast         Mike's laptop

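To make the layout concrete, here is a minimal C sketch of the fixed bit-length name above. The slide only says "K-bit admin prefix + machine-readable ObjectID"; the field widths (32 + 96 bits) and example values below are assumptions for illustration.

    /* Sketch of slide 11's fixed bit-length name. Widths and values
     * are illustrative assumptions, not the actual SCAFFOLD encoding. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t admin_prefix;   /* administrative domain, e.g. "Google"          */
        uint8_t  object_id[12];  /* domain-chosen granularity: a whole service,   */
                                 /* a memcache partition, a video, a laptop       */
    } scaffold_object_id;

    int main(void) {
        /* hypothetical encoding: prefix 42 = some domain, object 1 = a service */
        scaffold_object_id oid = { .admin_prefix = 42, .object_id = { [11] = 1 } };

        printf("objectid = %u:", oid.admin_prefix);
        for (size_t i = 0; i < sizeof oid.object_id; i++)
            printf("%02x", oid.object_id[i]);
        putchar('\n');
        return 0;
    }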
12. SCAFFOLD as …
– Clean-slate design
– Multi-datacenter architecture for a single administrative domain
• Deployed over legacy networks
• Few / no modifications to applications


13. Target: single administrative domain
[figure: service instances X and Y replicated across DC 1, DC 2, and a backbone, reachable from the Internet]
• Datacenter management more unified, simple, centralized
• Host OS net-imaged and can be fork-lift upgraded
• Already struggling to provide scalability and service-centrism
• Cloud computing lessens importance of fixed, physical hosts


14. Goals for service-centrism
• Handling replicated services
  – Control over replica selection among groups
  – Control of network resources shared between groups
  – Handling dynamics of group membership and deployments
• Handling churn
  – Flexibility: from sessions, to hosts, to datacenters
  – Robustness: largely hide churn from applications
  – Scalability: local changes shouldn't need to update global info
  – Scalability: churn shouldn't require per-client state in the network
  – Efficiency: wide-area migration shouldn't require tunneling


  15. Clean-Slate Design

16. Principles of SCAFFOLD
1. Service-level naming exposed to network
2. Anycast with flow affinity as basic primitive
3. Migration and failover through address remapping
   – Addresses bound to physical locations (aggregatable)
   – Flows identified by each endpoint, not pairwise
   – Control through in-band signalling; stateless forwarders
4. Minimize visibility of churn for scalability
   – Different addresses for different scopes (successive refinement)
5. Tighter host-network integration
   – Hosts / service instances dynamically update the network

17. Principles of SCAFFOLD
1. Service-level naming exposed to network
2. Anycast with flow affinity as basic primitive


18. Principles of SCAFFOLD
1. Service-level naming exposed to network
2. Anycast with flow affinity as basic primitive
SCAFFOLD address layout:
  ObjectID = Admin Prefix : Object Name
  FlowID = SS Label : Host Label : SocketID
(i) Resolve ObjectID to an instance FlowLabel
(ii) Route on the instance FlowLabel to the destination
(iii) Subsequent flow packets use the same FlowLabel

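Steps (i)–(iii) can be pictured as per-packet forwarding logic at the object router. A minimal C sketch follows; the types, the "label 0 = unresolved" convention, and the toy resolution table are assumptions layered on what the slide states, not SCAFFOLD's actual router code.

    /* Sketch of anycast with flow affinity, steps (i)-(iii) above. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t object_id;   /* Admin Prefix : Object Name        */
        uint32_t flow_label;  /* SS Label : Host Label (0 = unset) */
        uint16_t socket_id;
    } scaffold_addr;

    /* (i) hypothetical resolution table: ObjectID -> a live instance label */
    static uint32_t resolve_object(uint64_t object_id) {
        return object_id == 0xB ? 3 : 0;   /* toy entry: object B -> label 3 */
    }

    static void forward(scaffold_addr *dst) {
        if (dst->flow_label == 0)                             /* first packet */
            dst->flow_label = resolve_object(dst->object_id); /* (i) resolve  */
        /* (ii) route on the instance FlowLabel; (iii) later packets arrive
         * with the label already stamped and skip resolution entirely.      */
        printf("forward to %llx:%u:%u\n", (unsigned long long)dst->object_id,
               dst->flow_label, (unsigned)dst->socket_id);
    }

    int main(void) {
        scaffold_addr dst = { .object_id = 0xB, .flow_label = 0, .socket_id = 0 };
        forward(&dst);   /* resolved: B -> label 3       */
        forward(&dst);   /* same label reused: affinity  */
        return 0;
    }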

19. Principles of SCAFFOLD
1. Service-level naming exposed to network
2. Anycast with flow affinity as basic primitive
SCAFFOLD address layout:
  ObjectID = Admin Prefix : Object Name
  FlowID = SS Label : Host Label : SocketID


20. Decoupled flow identifiers
SCAFFOLD address = ObjectID (who) : Flow Labels (where) : SocketID (which conversation)
Each packet carries a Src FlowID and a Dst FlowID
3. Migration and failover through address remapping
4. Minimize visibility of churn for scalability

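One consequence of identifying flows per endpoint rather than pairwise: a receiving stack can demultiplex on its own SocketID alone, so the peer's labels may change without breaking the lookup. A hedged C sketch under assumed types (the table and names are illustrative, not code from the system):

    /* Sketch: demultiplex on the local (destination) SocketID only. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t object_id; uint32_t flow_label; uint16_t socket_id; } flow_id;
    typedef struct { flow_id src; flow_id dst; } scaffold_header;

    #define MAX_SOCKS 16
    static int socket_table[MAX_SOCKS];   /* SocketID -> application fd */

    static int demux(const scaffold_header *h) {
        /* key on our own SocketID; h->src is free to change under us */
        return (h->dst.socket_id < MAX_SOCKS) ? socket_table[h->dst.socket_id] : -1;
    }

    int main(void) {
        socket_table[5] = 42;          /* local socket 5 backed by app fd 42 */
        scaffold_header h = { .src = { 0xA, 2, 765 }, .dst = { 0xB, 3, 5 } };
        printf("fd=%d\n", demux(&h));  /* -> 42 */
        h.src.flow_label = 99;         /* peer remapped: demux unaffected */
        printf("fd=%d\n", demux(&h));  /* -> 42 still */
        return 0;
    }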
21. Manage migration / failover through in-band address remapping
[example: a flow's labels change from SS10:40:20 to SS8:30; ObjectID and SocketID are unchanged]
(i) Local end-point changes location, is assigned a new address
(ii) Existing connections signal the new address to remote end-points
(iii) Remote network stack is updated; the application is unaware

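Steps (ii)–(iii) at the remote side amount to swapping the stored peer FlowID while the application keeps its descriptor. A sketch in C; the REMAP message and all names are hypothetical stand-ins for the in-band signalling the slide describes.

    /* Sketch of in-band address remapping at the remote end-point. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t object_id; uint32_t flow_label; uint16_t socket_id; } flow_id;
    typedef struct { int app_fd; flow_id local, peer; } connection;

    /* (hypothetical) handler for a REMAP control message from the peer */
    static void on_remap(connection *c, uint32_t new_peer_label) {
        c->peer.flow_label = new_peer_label;  /* (iii) stack state only;  */
    }                                         /* ObjectID/SocketID untouched */

    int main(void) {
        connection c = { .app_fd = 7,
                         .local = { 0xA, 2, 765 },
                         .peer  = { 0xB, 0x102840 /* stands in for SS10:40:20 */, 234 } };
        on_remap(&c, 0x0830 /* stands in for SS8:30 */);   /* peer moved */
        printf("peer label now %x; app fd still %d\n", c.peer.flow_label, c.app_fd);
        return 0;
    }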
22. Minimize visibility of churn through successive refinement
[figure: endpoint FlowIDs as scoped label stacks, e.g. SS10:40:20 and SS4:50; the wide-area path refines labels 10 → 40 → 20 → 5 step by step]


23. Minimize visibility of churn through successive refinement
• Scalability:
  – Local churn only updates local state
  – Addresses remain hierarchical
• Info hiding: topology not globally exposed
[example: SRC = local host, Safari client, FlowID SS4:50:3:40; DST = Google YouTube service, FlowID SS10:40:20:5:20]
• Multiple levels of refinement; arbitrary subnet / address structure

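A sketch of why local churn stays local: if the FlowID is a stack of scoped labels (widest first), a move within a subnet rewrites only the innermost label, and a remote peer routing on the coarse prefix needs no update. The 4-level layout mirrors the SS10:40:20:5 example above but is otherwise an assumption.

    /* Sketch of successive refinement as a stack of scoped labels. */
    #include <stdint.h>
    #include <stdio.h>

    #define LEVELS 4
    typedef struct { uint16_t label[LEVELS]; } flow_labels;  /* [0] = widest scope */

    static void local_move(flow_labels *f, uint16_t new_leaf) {
        f->label[LEVELS - 1] = new_leaf;   /* only local state changes */
    }

    int main(void) {
        flow_labels dst = { { 10, 40, 20, 5 } };   /* SS10:40:20:5 */
        local_move(&dst, 7);    /* host moved within its subnet */
        /* a wide-area peer still routes on the unchanged 10:40 prefix */
        printf("now SS%u:%u:%u:%u\n",
               dst.label[0], dst.label[1], dst.label[2], dst.label[3]);
        return 0;
    }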
24. Integrated service-host-network management
[diagram: a Network Controller holds the object-resolution table (A → 2, B → 3) and pushes label/action routing state to routers; hosts send control messages up]
• bind(fd, A) → register(A, 2)
• close(fd) → unregister(A, 2)
• netlink up → join(2)
• netlink down → leave(2)


25. Integrated service-host-network management
• Self-configuration + adaptive to churn
[diagram as on slide 24]

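The event mapping on this diagram could be read as a thin host-side shim: socket and interface events emit registration messages to the controller. Only the event-to-message mapping comes from the slide; send_to_controller() and the hooks below are hypothetical.

    /* Sketch of the host shim implied by slides 24-25. */
    #include <stdint.h>
    #include <stdio.h>

    static void send_to_controller(const char *msg, uint64_t obj, uint32_t label) {
        printf("-> controller: %s(%llx, %u)\n", msg, (unsigned long long)obj, label);
    }

    static void on_bind(uint64_t obj, uint32_t host)  { send_to_controller("register",   obj, host); }
    static void on_close(uint64_t obj, uint32_t host) { send_to_controller("unregister", obj, host); }
    static void on_link_up(uint32_t host)             { send_to_controller("join",  0, host); }
    static void on_link_down(uint32_t host)           { send_to_controller("leave", 0, host); }

    int main(void) {
        on_link_up(2);      /* netlink up   -> join(2)          */
        on_bind(0xA, 2);    /* bind(fd, A)  -> register(A, 2)   */
        on_close(0xA, 2);   /* close(fd)    -> unregister(A, 2) */
        on_link_down(2);    /* netlink down -> leave(2)         */
        return 0;
    }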

26. Using SCAFFOLD: network-level protocols and network support


27. Application's network API
  Today (IP / BSD sockets)        SCAFFOLD
  fd = open();                    fd = open();
  Datagram:                       Unbound datagram:
    sendto(IP:port, data)           sendto(objectID, data)
  Stream:                         Bound datagram:
    connect(fd, IP:port)            connect(fd, objectID)
    send(fd, data);                 send(fd, data);
• IP: the application sees the network; the network doesn't see the app
• SCAFFOLD: the network sees the app; the app doesn't see the network

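The two columns side by side in code: the BSD half is standard POSIX sockets, while scaffold_sendto() is a hypothetical stand-in for the SCAFFOLD column above (a datagram addressed to an objectID rather than IP:port), stubbed here so the sketch is self-contained.

    /* The same client logic under both APIs (sketch). */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    /* --- hypothetical SCAFFOLD call (assumption, not a real library) --- */
    typedef uint64_t object_id_t;
    static int scaffold_sendto(object_id_t oid, const void *buf, size_t len) {
        printf("unbound datagram -> object %llx (%zu bytes)\n",
               (unsigned long long)oid, len);
        return (int)len;
    }

    int main(void) {
        const char msg[] = "hello";

        /* Today: name a host and port explicitly */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(5000) };
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
        sendto(fd, msg, sizeof msg, 0, (struct sockaddr *)&dst, sizeof dst);
        close(fd);

        /* SCAFFOLD: name the service; the network picks the instance */
        scaffold_sendto(0xB, msg, sizeof msg);
        return 0;
    }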

28. Unbound flows
[packet walk across Label Router 1 (LR 1), Object Router (OR), Label Router 2 (LR 2)]
• Server: bind(B) → join; the OR's table maps ObjectID B to instance flow labels 3 and 4; label routers map labels to ports (e.g. 3 → p1, 4 → p2)
• Client: sendto(B) emits DATA with SRC A:2:0, DST B:0:0; the OR resolves B and rewrites DST to B:3:0
• Server: sendto(A) replies with SRC B:3:0, DST A:0:0 — the flow stays unbound, so each datagram is resolved anew

29. Half-bound flows
Same exchange as slide 28, except the server replies with sendto(A, flags): SRC B:3:0, DST A:2:0
The reply carries the client's flow label (2), so the return path routes on the label and skips object resolution

30. Bound flows
• Client: connect(B) sends SYN with SRC A:2:765, DST B:0:0; the OR resolves it to DST B:3:0
• Server: after bind(B) and listen(), answers SYN/ACK with SRC B:3:234, DST A:2:765
• Client: ACK with SRC A:2:765, DST B:3:234 — connection bound; both endpoints now address each other by full FlowIDs
