$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\disteq}{\overset{d}{=}}
% linalg
\newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\detmat}[1]{\begin{vmatrix} #1 \end{vmatrix}} \newcommand{\spanb}[1]{\text{span}\{ #1 \}} \DeclareMathOperator{\conv}{conv} % convex hull \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\vectorize}{vec} \DeclareMathOperator{\matricize}{mat} \DeclareMathOperator{\adj}{adj} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\proj}{proj}
% brackets, norms, cardinalities
\newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\inner}[1]{\left\langle #1 \right\rangle} \newcommand{\innercpy}[1]{\inner{ #1, #1 }} \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\abs}[1]{\left\vert{#1}\right\vert} \newcommand{\card}[1]{\left\vert{#1}\right\vert}
% math vectors
\newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \newcommand{\vzero}{\textbf{0}} \newcommand{\vone}{\textbf{1}} \newcommand{\valpha}{{\boldsymbol\alpha}} \newcommand{\vepsilon}{{\boldsymbol\epsilon}} \newcommand{\vnu}{{\boldsymbol\nu}} \newcommand{\vpi}{{\boldsymbol\pi}} \newcommand{\veta}{{\boldsymbol\eta}} \newcommand{\vsigma}{ {\boldsymbol\sigma}} \newcommand{\vbeta}{ {\boldsymbol\beta}} \newcommand{\vtheta}{ {\boldsymbol\theta}} \newcommand{\vdelta}{ {\boldsymbol\delta}} \newcommand{\vlambda}{ {\boldsymbol\lambda}} \newcommand{\vmu}{ {\boldsymbol\mu}}
% common math sets
\newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\F}{\mathbb{F}} \newcommand{\T}{\mathbb{T}}
% limits
\def\sumn{\sum_{n=0}^\infty} \def\limn{\lim_{n\rightarrow\infty}} \def\prodn{\prod_{n=0}^\infty}
% mathcal
\newcommand{\mcA}{\mathcal{A}} \newcommand{\mcB}{\mathcal{B}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcD}{\mathcal{D}} \newcommand{\mcE}{\mathcal{E}} \newcommand{\mcF}{\mathcal{F}} \newcommand{\mcG}{\mathcal{G}} \newcommand{\mcH}{\mathcal{H}} \newcommand{\mcI}{\mathcal{I}} \newcommand{\mcJ}{\mathcal{J}} \newcommand{\mcK}{\mathcal{K}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcM}{\mathcal{M}} \newcommand{\mcN}{\mathcal{N}} \newcommand{\mcO}{\mathcal{O}} \newcommand{\mcP}{\mathcal{P}} \newcommand{\mcQ}{\mathcal{Q}} \newcommand{\mcR}{\mathcal{R}} \newcommand{\mcS}{\mathcal{S}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcU}{\mathcal{U}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcW}{\mathcal{W}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcZ}{\mathcal{Z}}
% distribs, probability
\newcommand\independent{\perp \!\!\! \perp} \DeclareMathOperator{\Laplace}{Laplace} \DeclareMathOperator{\Poisson}{Poisson} \DeclareMathOperator{\Exponential}{Exponential} \DeclareMathOperator{\Multinomial}{Multinomial} \DeclareMathOperator{\Bernoulli}{Bernoulli} \DeclareMathOperator{\Categorical}{Categorical} \DeclareMathOperator{\Uniform}{Uniform} \DeclareMathOperator{\Binomial}{Binomial} \DeclareMathOperator{\Hypergeometric}{Hypergeometric} \DeclareMathOperator{\GammaDist}{Gamma} \DeclareMathOperator{\NegativeBinomial}{NegativeBinomial} \DeclareMathOperator\sub{sub} \renewcommand{\d}[1]{\mathop{\mathrm{d} #1 }} \newcommand{\dkl}[2]{\mathop{D_\mathrm{KL}}\left({#1}\;\middle\|\;{#2}\right)} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}}
% operators
\DeclareMathOperator{\power}{{\mathcal{P}}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \DeclareMathOperator\mathProb{\mathbb{P}} \DeclareMathOperator\mathExp{\mathbb{E}} \DeclareMathOperator*\mathExpUnder{\mathbb{E}} \DeclareMathOperator*\fat{fat} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \newcommand{\E}{\mathExp} % need to overwrite stupid paragraph symbol \newcommand{\set}[2]{ \left\{ #1 \,\middle\vert\, #2 \right\} } \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle\vert\, #2 \right] } \newcommand{\CP}[2]{ \mathProb\left\{ #1 \,\middle\vert\, #2 \right\} } $$
The Semaphore Barrier
I wanted to share an interview question I came up with. The idea came from my operating and distributed systems classes, where we were expected to implement synchronization primitives and reason about concurrency, respectively.
Synchronization primitives can be used to coordinate across multiple threads working on a task in parallel.
Most primitives can be implemented through the use of a condition variable and lock, but I was wondering about implementing other primitives in terms of semaphores.
Implementing a barrier with semaphores, and just semaphores, might seem a bit contrived, but I think it paints a good picture of what it’s like to reason about concurrency.
Introduction to the Primitives
Semaphores
Semaphores are a type of synchronization primitive that encapsulates the idea of “thresholding”. A semaphore `s` has two operations: `s.up()` and `s.down()`. A semaphore also has an internal nonnegative number representing its state. A thread calling `s.down()` is allowed to continue only if this number is positive, in which case the number is atomically decremented and the thread goes on with its work; otherwise, the thread blocks until the number becomes positive. `s.up()` never blocks: it atomically raises the number, potentially releasing a blocked thread.
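Since most primitives can be built from a lock and condition variable, here is a minimal sketch of these `up`/`down` semantics in Python, purely as an illustration (we will still assume a "magic" OS-provided semaphore later):

```python
import threading

class Semaphore:
    """Sketch of a counting semaphore built from a lock + condition variable."""
    def __init__(self, value=0):
        self._value = value                  # the nonnegative internal state
        self._cond = threading.Condition()   # wraps a lock

    def down(self):
        with self._cond:
            while self._value == 0:          # number not positive: block
                self._cond.wait()
            self._value -= 1                 # atomically decrement, proceed

    def up(self):
        with self._cond:
            self._value += 1                 # raise the number; never blocks
            self._cond.notify()              # possibly release one waiter
```

Python's built-in `threading.Semaphore` exposes the same behavior, with `acquire()` playing the role of `down` and `release()` the role of `up`.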
If we wanted to make sure that only 5 threads executing `f()` (a function that we implement) ever printed `hello`, and we somehow preemptively set the state of semaphore `s` to 5, then the following code would work:

```python
def f():
    s.down()
    print("hello\n", end='')
```
Regardless of how many threads call `f` at the same time, because of the atomic guarantees on `s`’s state, only 5 threads will be let through to print `hello`.
If at any time in the above we had at least 6 threads call `f` and any thread also call `s.up()` at some point, eventually 6 `hello`s would be printed.
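This scenario can be run with Python’s built-in `threading.Semaphore`, whose `acquire`/`release` correspond to `down`/`up`. With the state preset to 5 and 8 threads calling `f`, exactly 5 get through while the rest stay blocked:

```python
import threading

s = threading.Semaphore(5)       # state preemptively set to 5
printed = []
lock = threading.Lock()

def f():
    s.acquire()                  # s.down(): only 5 threads ever get past this
    with lock:
        printed.append("hello")

threads = [threading.Thread(target=f, daemon=True) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=0.5)          # 3 threads remain blocked in their down()

print(len(printed))              # 5
```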
In the following, we’ll assume the OS provides a magic semaphore implementation.
Barriers
A barrier is similar to a semaphore, but it’s meant to be a one-off, well, barrier. A barrier is preconfigured to accept `n` threads. Its API is a single operation, `b.wait()`, where a thread waits until `n - 1` other threads are also waiting on `b`, and only then are the threads allowed to continue.
Barriers are useful when we want to coordinate some work. Suppose we have 2 threads that want to draw a picture together: thread 0 can only draw red and thread 1 can only draw blue, but no two threads may draw on the same half of the screen at the same time.
Assuming `b` has been initialized with `n=2`, the following would work:

| thread 0 | thread 1 |
| --- | --- |
| draw left half | draw right half |
| `b.wait()` | `b.wait()` |
| draw right half | draw left half |
Now, no matter which thread is faster, we’ll never violate the condition that no two threads draw on the same half of the screen.
The only way thread 0 can be on the left half is if it hasn’t crossed the barrier `wait` yet. The only way thread 1 can be on the left half is if it crossed the barrier, but since `n=2`, it can only cross the barrier if thread 0 is also waiting, in which case thread 0 must have finished drawing on the left half!
Similar logic can be applied to the right side; in other words, no side of the screen is ever shared by two threads at any given time, regardless of how fast one thread is compared to the other.
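Python happens to ship this primitive as `threading.Barrier`, so we can replay the drawing scenario with recorded events instead of actual drawing and check that both first halves finish before either second half starts:

```python
import threading

b = threading.Barrier(2)         # preconfigured to accept n=2 threads
events = []
lock = threading.Lock()

def painter(tid, first, second):
    with lock:
        events.append(("phase1", tid, first))    # draw the first half
    b.wait()                                     # wait for the other painter
    with lock:
        events.append(("phase2", tid, second))   # halves are swapped safely

t0 = threading.Thread(target=painter, args=(0, "left", "right"))
t1 = threading.Thread(target=painter, args=(1, "right", "left"))
t0.start(); t1.start()
t0.join(); t1.join()

phases = [phase for phase, _, _ in events]
print(phases)                    # ['phase1', 'phase1', 'phase2', 'phase2']
```

Both `phase1` entries always precede both `phase2` entries, regardless of which thread is faster.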
The Challenge
Warmup
Our goal will be to implement a barrier (namely, fill in what `b.wait()` does for a given `n`). Let’s focus on the case where we only have `n=2` threads.
This can be done with two semaphores.
Solution to the Warmup
As you may have guessed, the only nontrivial semaphore arrangement works. From here on, we let `s(i)` be the `i`-th semaphore, initialized with state 0. Similarly, \(t_i\) will refer to the \(i\)-th thread. Here’s what we would want `b.wait()` to do on each thread.
| t0 | t1 |
| --- | --- |
| `s(0).up` | `s(1).up` |
| `s(1).down` | `s(0).down` |
Indeed, \(t_0\) can’t advance past `b.wait()` unless `s(1)` is `up`ped, which only happens if \(t_1\) calls `b.wait()`. Symmetric logic shows that our barrier, if implemented to execute those instructions on each thread, will similarly stop \(t_1\) from advancing without \(t_0\) being ready.
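Packaging the warmup table into code (a sketch on top of Python’s `threading.Semaphore`, with `release`/`acquire` standing in for `up`/`down`), we can check that neither thread leaves the barrier before both have arrived:

```python
import threading

class TwoThreadBarrier:
    """Warmup barrier for n=2, built from two semaphores with state 0."""
    def __init__(self):
        self.s = [threading.Semaphore(0), threading.Semaphore(0)]

    def wait(self, tid):              # tid is 0 or 1
        self.s[tid].release()         # s(tid).up
        self.s[1 - tid].acquire()     # s(1 - tid).down

b = TwoThreadBarrier()
events = []
lock = threading.Lock()

def worker(tid):
    with lock:
        events.append("arrive")
    b.wait(tid)
    with lock:
        events.append("leave")

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(events)                     # ['arrive', 'arrive', 'leave', 'leave']
```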
The General Problem
Now, here’s the main question:
Can we implement an arbitrary barrier, capable of blocking `n` threads, with semaphores and no control flow? With control flow?

Now, can we do so efficiently, using as few semaphores as possible? In as little time per thread as possible?
Attempt: Extending the 2-thread Case
Let’s try extending our approach from the 2-thread case. Maybe we can just use 3 semaphores now, using the “cycle” that seems to be built into the 2-thread example?
| t0 | t1 | t2 |
| --- | --- | --- |
| `s(0).up` | `s(1).up` | `s(2).up` |
| `s(1).down` | `s(2).down` | `s(0).down` |
But this won’t work, unfortunately: suppose \(t_0\) is running slow. Both \(t_1\) and \(t_2\) finish well ahead of time, and each calls `b.wait()`. Then \(t_2\) `up`s `s(2)`, after which \(t_1\) can pass through without waiting for \(t_0\) to call `b.wait()`, a violation of our barrier behavior.
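The race is easy to reproduce in a short simulation, again mapping `up`/`down` onto `release`/`acquire` of Python’s `threading.Semaphore`. With \(t_0\) never arriving, \(t_1\) still escapes the “barrier”:

```python
import threading

s = [threading.Semaphore(0) for _ in range(3)]
passed = []

def wait(tid):
    s[tid].release()                # s(tid).up
    s[(tid + 1) % 3].acquire()      # s(tid+1 mod 3).down
    passed.append(tid)

# t0 is running slow and never calls wait(); t1 and t2 arrive early.
t1 = threading.Thread(target=wait, args=(1,), daemon=True)
t2 = threading.Thread(target=wait, args=(2,), daemon=True)
t1.start(); t2.start()
t1.join(timeout=1.0)
t2.join(timeout=1.0)

print(passed)                       # [1]: t1 got through without t0
```

Here \(t_2\) stays blocked on `s(0)`, but its `up` on `s(2)` is enough to let \(t_1\) through, which is exactly the violation described above.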
Answer
Not so fast! Try it yourself! How efficient is your solution? There are several solutions, in increasing order of difficulty. The following list describes the asymptotic space complexity (number of semaphores used) and time complexity (per thread) of each.
- \(O(n^2)\) space and \(O(n)\) time
- \(O(n)\) space and \(O(n)\) time
- \(O(n)\) space and \(O(1)\) average time, \(O(n)\) worst-case time
- \(O(n)\) space and \(O(1)\) worst-case time
- \(O(1)\) space and \(O(1)\) worst-case time
A Note on Thread IDs
The fact that we can write different code for each of the threads to execute in the above examples might seem a bit questionable. However, we can get around this by assuming that we have access to thread IDs. As long as we can procure a thread’s procedure given just its ID (and the function procuring such a procedure doesn’t take \(O(n^2)\) space), we should be fine.
Even if the thread ID isn’t available, we can use an atomic counter, which assigns effective thread IDs based on which thread called `b.wait()` first:

```python
atomic = AtomicInteger(0)

def wait():
    tid = atomic.increment_and_get()
    ...
```
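Python has no built-in `AtomicInteger`; for completeness, here is a lock-based stand-in matching the hypothetical API above, checked by racing a few threads for IDs:

```python
import threading

class AtomicInteger:
    """Lock-based stand-in for the hypothetical AtomicInteger above."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def increment_and_get(self):
        with self._lock:              # increment and read atomically
            self._value += 1
            return self._value

atomic = AtomicInteger(0)
results = []

def worker():
    results.append(atomic.increment_and_get())  # effective thread ID

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))            # [1, 2, 3, 4]: distinct IDs, no duplicates
```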