Honorific Security: Efficient Two-Party Computation with Offloaded
Arbitration and Public Verifiability
Tianxiang Dai^1, Yufan Jiang^{2,5}, Yong Li^3, Jörn Müller-Quade^{2,5} and Andy Rupp^{4,5}
1 Lancaster University Leipzig, Germany
2 Karlsruhe Institute of Technology, Germany
3 Huawei European Research Center, Germany
4 University of Luxembourg, Luxembourg
5 KASTEL Security Research Labs, Germany
Keywords:
Two-Party Computation, Security Notion, Efficient Protocols, Multi-Party Computation, Honorific Security.
Abstract:
In secure two-party computation (2PC), an adversary is typically categorized as semi-honest or malicious, depending on whether it follows the protocol specification. Covert security (Aumann and Lindell, 2010) was the first to explore the “middle ground”, in which an active adversary who cheats is caught with a predefined probability. Other security notions, such as publicly auditable security (Baum et al., 2014) and the (robust) accountability family (Küsters et al., 2010; Graf et al., 2023; Rivinius et al., 2022), achieve public verifiability as a stronger security guarantee by relying on heavy offline and online constructions involving zero-knowledge proofs and/or a bulletin board functionality. In this work, we propose a new security notion called honorific security, in which an external arbiter can identify the cheater without a bulletin board. Specifically, we delay and outsource the verification steps to the arbiter, so that the online computation is accelerated. We show that a maliciously secure garbled circuit (GC) (Yao, 1986) protocol can be constructed with only slightly more overhead than a passively secure protocol. Our construction performs up to 2.37 times and 13.30 times as fast as the state-of-the-art protocols with covert and malicious security, respectively.
1 INTRODUCTION
In secure two-party computation, two parties are willing to jointly compute a function f without revealing their private inputs {x_1, x_2} to each other. 2PC protocols should guarantee that, besides the output of the given function {y_1, y_2} = f(x_1, x_2), nothing else can be learned (privacy), and that the output {y_1, y_2} is distributed correctly (correctness).
2PC protocols can be designed against various types of adversaries, making trade-offs between efficiency and security. Up to now, two main categories of adversaries have been considered: the semi-honest adversary and the malicious adversary. A semi-honest adversary does not violate the protocol but attempts to learn more than the predefined output of the function, whereas a malicious adversary may deviate arbitrarily from the protocol by taking actions to manipulate the result and messages. Protocols that are secure against a semi-honest adversary offer only a limited security guarantee, while those with malicious security (Katz et al., 2018; Lindell and Pinkas, 2015; Wang et al., 2017; Damgård et al., 2012; Asharov et al., 2015; Keller et al., 2018) are usually too inefficient for high-throughput applications in practice (Evans et al., 2018; Hastings et al., 2019). Covert security (Evans et al., 2017; Damgård et al., 2010; Goyal et al., 2008; Asharov and Orlandi, 2012; Kolesnikov and Malozemoff, 2015) targets the middle ground between semi-honest and malicious security: an active adversary may successfully cheat during the protocol execution, but it is caught with a constant probability ε, and the fear of being caught deters cheating. However, it remains an open question how ε should be properly determined before the protocol execution.
In the meantime, applications with large economic or political consequences, such as auctions (Benhamouda et al., 2019; Cartlidge et al., 2019) and e-voting systems (Adida, 2008; Küsters et al., 2020), require that the result be computed correctly and that this correctness be publicly verifiable. Publicly auditable security (Baum et al., 2014) was proposed to address this issue: an external auditor is introduced to take over all verifications and to provide a publicly verifiable audit result. A small subtlety is that the auditor is not able to identify the cheater, even if it finds out that the protocol result is incorrect (Baum et al., 2014). Thus, the provided deterrence may not be sufficient, although public verifiability is achieved. Accountability (Küsters et al., 2010; Graf et al., 2023) and its variant robust accountability (Rivinius et al., 2022) provide a stronger security guarantee than publicly auditable security. This line of work guarantees that once the protocol terminates, parties output either the correct result or a subset of cheaters¹ (no honest party is falsely blamed), provided that the number of corrupted parties is below a predefined threshold. Additionally, once any cheater is detected, this can be publicly verified. To achieve such a strong notion, parties have to compute (and verify) non-interactive zero-knowledge proofs (NIZKPs) and commitments in the online stage.
¹ If a party is corrupted, it may still behave honestly during the protocol execution.
1.1 Outsourced Verification and Public Verifiability Without Bulletin Board
In both maliciously secure and covert-secure protocols, we notice that functional computation and misbehavior detection are always performed simultaneously by the protocol participants, and public verifiability is often regarded as a by-product of malicious security. For applications such as cloud services (Nordholt and Toft, 2017; Bestavros et al., 2017; Archer et al., 2018; Alexandru et al., 2018), privacy-preserving machine learning (PPML) tasks (Li et al., 2017; Liu et al., 2018; So et al., 2021) and blockchain platforms (Benhamouda et al., 2019; Gao et al., 2019; Cordi et al., 2022; Liu et al., 2021; Zhou et al., 2021), efficiency is an important concern beyond security.
In this work, we use the existence of an external party and a potential arbitration as a deterrence to force parties to behave honestly. The deterrence provided by publicly auditable security (Baum et al., 2014) is not sufficient, while the line of work (Küsters et al., 2010; Graf et al., 2023; Rivinius et al., 2022) relies on heavy online (and offline) computation and verification. We therefore propose a novel 2PC security notion, honorific security, by introducing an arbiter P_Ar which can identify the cheater. Since the verification steps are outsourced to P_Ar, the online computation of a maliciously secure protocol can thus be significantly accelerated.
Figure 1: Workflow of a potential arbitration.
Previous work (Küsters et al., 2010; Graf et al., 2023; Rivinius et al., 2022; Baum et al., 2014) simply assumes that an external auditor has access to all transcripts published on a bulletin board ideal functionality F_Bulletin. We point out that building protocols on top of F_Bulletin is tricky, since the implementation of F_Bulletin doubles the communication overhead (the same message must be transmitted to the auditor once again). In this paper, we provide a more realistic implementation that does not rely on F_Bulletin.
As shown in Fig. 1, after a protocol Π is executed, P_B (or both parties) has already collected decisive evidence and signatures. Suppose a corrupted P_A has cheated during the protocol execution; if P_B sends evidence ct_A and signature σ_A to P_Ar, a publicly verifiable certificate cert_A is generated that identifies the cheater P_A. Similar to accountability, we require that an honest party cannot be falsely blamed. Note that such a condition is not trivial in honorific security if P_Ar is maliciously corrupted². Since P_Ar is activated only after the main protocol has finished, we emphasize that protocols achieving honorific security differ from traditional three-party computation, where all three parties must stay active during the whole protocol execution.
² In (Küsters et al., 2010; Graf et al., 2023; Rivinius et al., 2022; Baum et al., 2014), the auditor is considered an external party which cannot be corrupted.
1.2 Our Contribution
In this work, we formalize a new security notion called honorific security and then present an efficient GC protocol and an oblivious transfer (OT) protocol with honorific security in the universal composability (UC) framework (Canetti, 2001). More specifically, we achieve the following goals:
New Ideal Functionalities in UC. We require correctness and privacy when there is at most one maliciously corrupted party (honest majority). This is achieved by adding an extra, maliciously corruptible third party (the arbiter P_Ar) to the 2PC ideal functionality, which can check and verify the misbehavior of protocol participants. We point out that it is sometimes not sufficient for P_Ar to only receive the arbitration result from the functionality internally, namely if a participant is required to use a certain input to the functionality or to use the exact output received from the functionality for further computation. Thus, we define an ideal functionality for two-party computation in two modes. In Lazy-Arbiter mode (F^{2pc}_{LA}), the functionality sends only the arbitration result to P_Ar. In Busy-Arbiter mode (F^{2pc}_{BA}), the functionality also forwards the input (and, if needed, the output) of the arbitrated party to P_Ar, enabling P_Ar to check input consistency beyond the ideal functionality.
Practical Constructions. We construct protocols that realize F^{2pc}_{LA} and F^{m×OT}_{LA} based on symmetric-key encryption, garbled circuits and digital signatures, and we prove that all of our constructions are secure in the UC framework. Specifically, we do not rely on the bulletin board functionality F_Bulletin. The general idea behind our constructions is that participants are responsible for exchanging encrypted evidence, including the input-independent randomness they have used, with each other. To achieve public verifiability, participants also have to sign the evidence and the transcript hash to be compared with. Once a misbehavior is discovered during an arbitration, P_Ar can simply publish the decrypted evidence along with the signed hash of the transcripts, enabling the public to identify the cheater.
High Efficiency. We provide a fair comparison of our protocol against state-of-the-art GC-based protocols with covert and malicious security in Section 8. The intuition behind honorific security is that both parties should behave honestly even if the arbiter does not interfere. 2PC protocols are then accelerated by offloading non-functional computations to a potential arbitration phase, which can be executed separately. To highlight the power of our notion and practical constructions, we always let the arbitration take place and count its cost in the experiments. We show that even with the arbitration cost added, our protocols are almost as efficient as those with semi-honest security, and up to 2.37 times and 13.30 times as fast as protocols with covert and malicious security, respectively.
2 RELATED WORK
Security Notions Beyond Semi-Honest. The formal definition of malicious security can be found in Goldreich's seminal two-volume classic (Goldreich, 2009). Protocols with malicious security (Katz et al., 2018; Lindell and Pinkas, 2015; Wang et al., 2017; Damgård et al., 2012; Asharov et al., 2015; Keller et al., 2018; Dittmer et al., 2022; Hazay et al., 2020; Cui et al., 2023; Yang et al., 2020) ensure that even if an adversary A deviates arbitrarily from the protocol definition, A cannot learn anything about other parties' inputs, except that A may cause other parties to abort (security with abort) (Lindell, 2017). Covert security was first introduced by Aumann and Lindell (Aumann and Lindell, 2010) in 2007 against rational adversaries, targeting the middle ground between semi-honest and malicious security. In a covert-secure protocol, cheating succeeds with probability 1 − ε and is detected by the other parties with the remaining probability ε, which is called the deterrence factor. Follow-up works (Asharov and Orlandi, 2012; Damgård et al., 2010; Goyal et al., 2008; Lindell, 2016) have confirmed that protocols with a reasonable ε have a clear efficiency advantage over those with malicious security. Among them, Asharov and Orlandi (Asharov and Orlandi, 2012) highlight another critical feature that a covert-secure MPC protocol may need: public verifiability (PVC). However, existing instantiations (Asharov and Orlandi, 2012; Aumann and Lindell, 2010; Damgård et al., 2010; Evans et al., 2017; Goyal et al., 2008; Hong et al., 2019; Kolesnikov and Malozemoff, 2015; Baum et al., 2020; Scholl et al., 2021) are still much heavier than the semi-honest ones in terms of computational resources and bandwidth consumption.
Distinction from Other Notions. Bringing in an arbiter already makes our model distinct from pure two-party computation, where each party takes care of its own privacy all by itself after the setup. Compared to covert security (Aumann and Lindell, 2010), we require the probability of catching cheaters to be overwhelming instead of a constant. A similar approach (Baum et al., 2014) introduces an external auditor beyond the protocol participants and achieves publicly auditable security; however, the auditor is not able to identify the cheater. The notion of identifiable abort (Ishai et al., 2014; Baum et al., 2016; Nie et al., 2023) allows all honest parties to identify at least one corrupted party that causes the protocol to abort during execution. Accountability (Küsters et al., 2010; Graf et al., 2023) and its variant robust accountability (Rivinius et al., 2022) achieve both guaranteed output delivery and public verifiability by performing NIZKPs and commitments in the online stage. As mentioned in the previous section, all of these notions require either a broadcast channel or a bulletin board functionality (or both) to achieve the additional security guarantee. The most important aspect of these notions is that the functional computation and the verification computation are performed simultaneously by the protocol participants themselves.
Role of an Extra Party. The idea of delegating part of the MPC tasks to an independent party goes back to Beaver's commodity-based cryptography in 1997 (Beaver, 1997). Independently generated correlated randomness, such as Beaver's multiplicative triples (Beaver, 1997), has been extremely helpful for the performance of sharing-based MPC frameworks (Keller et al., 2018; Küsters et al., 2010; Graf et al., 2023; Baum et al., 2014). Another purpose of introducing an external party is security. MPC constructions using hardware tokens (Badrinarayanan et al., 2019; Goyal et al., 2010) also imply an independent party for attestation and sealing, which can persuade both Alice and Bob to believe in the correctness and non-malleability of the logic in the hardware (Anati et al., 2013). Prior works (Byali et al., 2019; Koti et al., 2022) also consider including extra parties as verification nodes that do not participate in the functional part (in the honest-majority setting).
3 PRELIMINARIES
We use “a party uses randomness derived from seed” as a convention for the action that a party uses seed as the key of a pseudorandom function (PRF) to obtain a sufficiently long stream of pseudorandomness.
Definition 3.1 (Garbling Scheme). A circuit garbling scheme GCS = (Gb, En, Ev, De) consists of the following algorithms.
Gb(1^κ, C) denotes the garbling algorithm. It takes the security parameter 1^κ and the circuit C as input. It returns a garbled circuit GC, encoding information e, and decoding information d.
En(w, e) denotes the encoding algorithm. It takes the input w and encoding information e as input. It returns the garbled input {W_{i,b}}.
Ev(GC, W) denotes the evaluation algorithm. It takes the garbled circuit GC and garbled input {W_{i,b}} as input. It returns a garbled output {O_i}.
De(d, {O_i}) denotes the decoding algorithm. It takes the decoding information d and garbled output {O_i} as input. It returns the output {o}.
Definition 3.2 (Correctness (Bellare et al., 2012)). A garbling scheme GCS = (Gb, En, Ev, De) is correct if for all functions C and inputs w:
\Pr\big[\, \mathrm{De}(d, \mathrm{Ev}(GC, \mathrm{En}(e, w))) = C(w) \;:\; (GC, e, d) \leftarrow \mathrm{Gb}(1^{\kappa}, C) \,\big] = 1.
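To make the interface of Definitions 3.1 and 3.2 concrete, the following Python sketch spells out the four algorithms and a correctness check for a single input; the class and function names are ours and purely illustrative, not taken from any existing garbling library.

```python
# Minimal sketch of the garbling-scheme interface GCS = (Gb, En, Ev, De) and of the
# correctness check from Definition 3.2. All names are illustrative placeholders.
from dataclasses import dataclass
from typing import Any, Callable, List, Protocol


@dataclass
class Garbling:
    GC: Any   # garbled circuit
    e: Any    # encoding information
    d: Any    # decoding information


class GarblingScheme(Protocol):
    def Gb(self, kappa: int, circuit: Any) -> Garbling: ...
    def En(self, e: Any, w: List[int]) -> List[bytes]: ...     # garbled input labels W
    def Ev(self, GC: Any, W: List[bytes]) -> List[bytes]: ...  # garbled output labels O
    def De(self, d: Any, O: List[bytes]) -> List[int]: ...     # plaintext output


def check_correctness(gcs: GarblingScheme, circuit: Any,
                      evaluate: Callable[[Any, List[int]], List[int]],
                      w: List[int], kappa: int = 128) -> bool:
    """De(d, Ev(GC, En(e, w))) must equal C(w) for the given input w (Definition 3.2)."""
    g = gcs.Gb(kappa, circuit)
    W = gcs.En(g.e, w)
    O = gcs.Ev(g.GC, W)
    return gcs.De(g.d, O) == evaluate(circuit, w)
```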
Definition 3.3 (Simulatable Privacy (Bellare et al., 2012; Lu et al., 2021)). A garbling scheme GCS = (Gb, En, Ev, De) is simulatably private if for all functions C and inputs w there exists a probabilistic polynomial-time (PPT) simulator Sim such that for all PPT adversaries A:
\Big|\Pr\big[\, b = b' \;:\; (GC_0, e_0, d_0) \leftarrow \mathrm{Gb}(1^{\kappa}, C);\ W_0 \leftarrow \mathrm{En}(e_0, w);\ (GC_1, W_1, d_1) \leftarrow \mathrm{Sim}(1^{\kappa}, C(w), \Phi(C));\ b \leftarrow \{0,1\};\ b' \leftarrow A(GC_b, W_b, d_b) \,\big] - \tfrac{1}{2}\Big| \leq \mathrm{negl}(\kappa),
where Φ denotes the side-information function.
We then denote a signature scheme as SIG = (SIG.Gen, SIG.Sign, SIG.Vfy) and a symmetric encryption scheme as Π = (Π.KGen, Π.ENC, Π.DEC). Furthermore, we let CRS denote the common reference string.
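Where later sections invoke SIG and Π, one possible instantiation in Python uses Ed25519 and Fernet from the `cryptography` package; this is only a stand-in we picked for the sketches in this document (the evaluation in Section 8 uses ECDSA via openssl), not the authors' fixed choice.

```python
# Illustrative stand-ins for SIG = (SIG.Gen, SIG.Sign, SIG.Vfy) and Pi = (KGen, ENC, DEC),
# built on the `cryptography` package. The paper's implementation uses ECDSA (openssl)
# and does not fix a particular SKE; these helpers are reused by the later sketches.
from cryptography.exceptions import InvalidSignature
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sig_gen():
    """Returns a key pair (pk, sk)."""
    sk = Ed25519PrivateKey.generate()
    return sk.public_key(), sk


def sig_sign(sk: Ed25519PrivateKey, msg: bytes) -> bytes:
    return sk.sign(msg)


def sig_vfy(pk: Ed25519PublicKey, msg: bytes, sigma: bytes) -> bool:
    try:
        pk.verify(sigma, msg)
        return True
    except InvalidSignature:
        return False


def ske_kgen() -> bytes:
    return Fernet.generate_key()


def ske_enc(key: bytes, plaintext: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)


def ske_dec(key: bytes, ciphertext: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)
```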
4 FUNCTIONALITIES WITH HONORIFIC SECURITY
In this section, we formally define the ideal functionality with honorific security; see Fig. 2 and Definition 4.1 for details.
4.1 Functionality F^{2pc}_{LA} and F^{2pc}_{BA}
Let P_A, P_B and P_Ar denote the participating parties P = {P_A, P_B, P_Ar}, and let P_c ⊆ {P_A, P_B, P_Ar} denote the corrupted parties controlled by an adversary.
Cheat Flag. Besides the basic queries defined in UC (Canetti, 2001), the ideal functionality now has an additional internal flag cheatParty, recording the cheating parties (or none) during the execution. Another internal state, arbiterReady, denotes whether the flag cheatParty has been set properly.
Cheat Query. We then extend the ideal functionality with a new instruction that a simulator S can send to it. Similar to the ideal functionalities with covert security, S is able to send a cheat query to the ideal functionality, and this cheat decision “must be made before the adversary learns anything” (Aumann and Lindell, 2010). Remark that P_Ar can only cheat by framing a party with an incorrect arbitration result, since P_Ar is activated only for the arbitration. Although such a cheat is prevented by our protocol construction, we still provide the cheat option to P_Ar for future extensions.

Functionality F^{2pc}_{LA} interacts with players P := {P_A, P_B, P_Ar} and the adversary S. It has three internal states: a set of corrupted parties P_c ⊆ {P_A, P_B, P_Ar}, a set of cheated parties cheatParty ⊆ {P_A, P_B}, and a state arbiterReady ∈ {true, false}. Initially, P_c = ∅, cheatParty = ∅, arbiterReady = false.
Corrupt: Upon receiving (corrupt, P_i, sid) from the adversary S:
- If P_i ∈ P and P_c = ∅, set P_c := {P_i}. Send (corrupt success, P_i, sid) to S.
- Otherwise, send (corrupt failed, P_i, sid) to S.
Compute: Upon receiving (compute, x_A, P_A, sid) from party P_A and (compute, x_B, P_B, sid) from party P_B:
- Compute f(x_A, x_B) and send it to P_B.
- Set arbiterReady = true.
Cheat: Upon receiving (cheat, P_i, sid) from party P_i, where P_i ∈ {P_A, P_B}:
- If P_i ∈ P_c, send a message (cheat success, x_j, sid) to S, wait to receive o_j from P_i and send o_j to P_j. Set cheatParty := {P_i} ∪ cheatParty and arbiterReady = true.
- Otherwise, send a message (cheat failed, P_i, sid) to S.
Arbitrate: Upon receiving ((arbitrate, P_j), P_i, sid) from party P_i intended to arbitrate P_j:
- If arbiterReady = false, ignore this query.
- Lazy-Arbiter mode: If P_j ∈ cheatParty, send (cheated, P_j, sid) to P_Ar and halt. Otherwise, send (honest, P_j, sid) to P_Ar.
- Busy-Arbiter mode: If P_j ∈ cheatParty, send (cheated, P_j, sid) to P_Ar and halt. Otherwise, send info {x_j, o_j} to P_Ar.
Figure 2: Two-Party Functionality F^{2pc}_{LA} and F^{2pc}_{BA}.
Arbitrate Query. Generally, we allow any party, including P_A, P_B and P_Ar, to send an arbitrate query ((arbitrate, P_j), P_i, sid) to the ideal functionality. From a practical point of view, allowing P_Ar to request arbitration requires P_Ar to be aware of every protocol execution, which might be unrealistic. In this paper, we focus on the case that only P_A and P_B send this query to the ideal functionality.
Arbiter Mode. On the other hand, P_Ar may sometimes also have to arbitrate whether a protocol participant has handed in the correct input to the ideal functionality, or has correctly used the output received from an ideal functionality in the further execution, even if the participant behaved honestly during the protocol execution. We thus split the ideal functionality into two modes:
- Lazy-Arbiter mode: P_Ar only receives the arbitration result from the ideal functionality, i.e., whether the arbitrated party (say P_j in the following) cheated during the protocol execution.
- Busy-Arbiter mode: If P_j has already cheated during the protocol execution, P_Ar receives (cheated, P_j, sid) from the ideal functionality. Otherwise, instead of receiving this notification, P_Ar obtains info {x_j, o_j}. We let F_{BA,P_j} denote such a functionality, where P_j is the party to be arbitrated.
We point out that the input of a sub-protocol does not have to be the private input of any party, and P_Ar only receives such information when P_j has to be arbitrated.
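As a reading aid for the two modes, the following Python sketch mirrors the bookkeeping of F^{2pc}_{LA} / F^{2pc}_{BA} from Fig. 2 (internal states, the cheat flag, and the answer to an arbitrate query); it is our own simplification, not an executable UC functionality.

```python
# Simplified bookkeeping of F_LA^2pc / F_BA^2pc (Fig. 2). Messages that the
# functionality would send are modelled as return values; purely illustrative.
class TwoPartyFunctionality:
    def __init__(self, f, busy_arbiter: bool = False):
        self.f = f                      # the agreed function f(x_A, x_B)
        self.busy_arbiter = busy_arbiter
        self.corrupted = set()          # P_c
        self.cheat_party = set()        # cheatParty
        self.arbiter_ready = False      # arbiterReady
        self.inputs = {}                # recorded inputs x_A, x_B
        self.outputs = {}               # recorded outputs

    def corrupt(self, party: str):
        if party in ("P_A", "P_B", "P_Ar") and not self.corrupted:
            self.corrupted.add(party)
            return ("corrupt success", party)
        return ("corrupt failed", party)

    def compute(self, x_A, x_B):
        self.inputs = {"P_A": x_A, "P_B": x_B}
        self.outputs["P_B"] = self.f(x_A, x_B)
        self.arbiter_ready = True
        return ("output to P_B", self.outputs["P_B"])

    def cheat(self, party: str, forced_output):
        if party not in self.corrupted:
            return ("cheat failed", party)
        other = "P_B" if party == "P_A" else "P_A"
        self.cheat_party.add(party)
        self.arbiter_ready = True
        self.outputs[other] = forced_output           # adversary fixes the other party's output
        return ("cheat success", self.inputs.get(other))

    def arbitrate(self, target: str):
        if not self.arbiter_ready:
            return None                                # query ignored
        if target in self.cheat_party:
            return ("cheated", target)
        if self.busy_arbiter:                          # Busy-Arbiter: forward info {x_j, o_j}
            return ("info", self.inputs.get(target), self.outputs.get(target))
        return ("honest", target)                      # Lazy-Arbiter: result only
```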
Definition 4.1 (Honorific Security). Let F^{2pc} be a two-party functionality and let F^{2pc}_{LA}, F^{2pc}_{BA} be the corresponding functionalities with honorific security in Lazy-Arbiter mode and Busy-Arbiter mode. We say that a protocol Π UC-realizes F^{2pc} with honorific security if Π UC-realizes F^{2pc}_{LA} or F^{2pc}_{BA}.
4.2 Functionality F^{m×OT}_{BA,S} and F^{m×ROT}_{BA,S}
The OT ideal functionality m×OT and the random OT ideal functionality m×ROT with honorific security are given in Fig. 3. When parties run a GC protocol Π_GC that calls F^{m×OT}_{BA,S} as a sub-protocol, this is exactly the case mentioned above where P_A's input to F^{m×OT}_{BA,S} is just randomness generated during the execution of Π_GC. Similarly, if F^{m×ROT}_{BA,S} is chosen as a sub-protocol, the output delivered to P_A does not reveal any party's real input. Importantly, P_B's input to F^{m×OT}_{BA,S} and F^{m×ROT}_{BA,S} is its real private input with respect to Π_GC and does not need to be arbitrated beyond the scope of an OT protocol. For this reason, if P_B is arbitrated, the ideal functionality should only deliver the arbitration result to P_Ar.
Functionality F^{m×OT}_{BA,S} interacts with players P := {S, R, P_Ar} and the adversary S. It has three internal states: a set of corrupted parties P_c ⊆ {S, R, P_Ar}, a set of cheated parties cheatParty ⊆ {S, R}, and a state arbiterReady ∈ {true, false}. Initially, P_c = ∅, cheatParty = ∅, arbiterReady = false.
Corrupt: same as F^{2pc}_{LA} and F^{2pc}_{BA}.
Compute (F^{m×OT}_{BA,S}, F^{m×OT}_{LA} and F^{m×OT}_{BA,R}): Upon receiving ({x_0, x_1}^m, S, sid) from S and (x_B, R, sid) from R, send {x_{x_B[i]}}^m to R and set arbiterReady = true.
Compute (F^{m×ROT}_{BA,S}): Upon receiving (x_B, R, sid) from R:
- If S ∉ P_c, sample random {x_0, x_1}^m, send {x_0, x_1}^m to S and {x_{x_B[i]}}^m to R.
- Otherwise, wait for S to input {x_0, x_1}^m, then output as above using these values.
- Set arbiterReady = true.
Cheat: same as F^{2pc}_{LA} and F^{2pc}_{BA}.
If arbiterReady = false, ignore the following queries.
Arbitrate (F^{m×OT}_{BA,S} and F^{m×ROT}_{BA,S}):
- Upon receiving ((arbitrate, S), R, sid) from party R intended to arbitrate S: If S ∈ cheatParty, send (cheated, S, sid) to P_Ar and halt. Otherwise, send {x_0, x_1}^m to P_Ar.
- Upon receiving ((arbitrate, R), S, sid) from party S intended to arbitrate R: If R ∈ cheatParty, send (cheated, R, sid) to P_Ar and halt. Otherwise, send (honest, R, sid) to P_Ar.
Arbitrate (F^{m×OT}_{BA,R}):
- Upon receiving ((arbitrate, S), R, sid) from party R intended to arbitrate S: If S ∈ cheatParty, send (cheated, S, sid) to P_Ar and halt. Otherwise, send (honest, S, sid) to P_Ar.
- Upon receiving ((arbitrate, R), S, sid) from party S intended to arbitrate R: If R ∈ cheatParty, send (cheated, R, sid) to P_Ar and halt. Otherwise, send (x_B, {x_{x_B[i]}}) to P_Ar.
Arbitrate (F^{m×OT}_{LA}):
- Upon receiving ((arbitrate, P_j), P_i, sid) from party P_i intended to arbitrate P_j: If P_j ∈ cheatParty, send (cheated, P_j, sid) to P_Ar and halt. Otherwise, send (honest, P_j, sid) to P_Ar.
Figure 3: OT Functionality F^{m×OT}_{BA,S}, F^{m×ROT}_{BA,S}, F^{m×OT}_{BA,R} and F^{m×OT}_{LA}.
5 THE MAIN PROTOCOL Π_GC
In this section, we show how to realize F^{2pc}_{LA} based on GC. We give a high-level overview of the protocol Π_GC and provide a formal description in Fig. 4. Using a signature scheme SIG, all three parties P_A, P_B and P_Ar run SIG.Gen to obtain their public-private key pairs. We assume that all parties know each other's public keys as the common reference string (CRS) before running the protocol. In the CRS model, this allows the simulator S to simulate the key pairs for the signature scheme.
In the key setup phase, P_Ar calls Π.KGen of a symmetric-key encryption scheme to generate a symmetric key key_Ar. P_Ar then computes a commitment h on key_Ar and sends key_Ar, h and decom to P_A, along with a signature σ_Ar on h. Afterward, P_B receives h and σ_Ar from P_A. This setup mainly serves P_A for preparing its encrypted evidence in the main part of the protocol. Obviously, P_B can obtain another symmetric key with the same setup steps simultaneously if needed, with P_A holding the signed commitment.
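A minimal sketch of this key-setup phase, reusing the illustrative SIG/SKE helpers from Section 3; the hash-based commitment Com(key_Ar) = SHA-256(key_Ar || decom) is our simplifying assumption, as the protocol only needs some binding and hiding commitment.

```python
# Sketch of the key setup of Pi_GC: P_Ar generates key_Ar, commits to it and signs the
# commitment; P_A checks the opening and P_Ar's signature and forwards (h, sigma_Ar) to P_B.
# The concrete commitment Com(k) = SHA-256(k || decom) is an illustrative assumption.
import hashlib
import os


def arbiter_key_setup(sk_Ar):
    key_Ar = ske_kgen()                              # symmetric key for the evidence
    decom = os.urandom(32)                           # commitment randomness
    h = hashlib.sha256(key_Ar + decom).digest()      # h = Com(key_Ar)
    sigma_Ar = sig_sign(sk_Ar, h)                    # signature on the commitment
    return key_Ar, h, decom, sigma_Ar                # sent to P_A


def party_A_check_setup(pk_Ar, key_Ar, h, decom, sigma_Ar) -> bool:
    # P_A verifies the commitment opening and P_Ar's signature; on success it
    # forwards (h, sigma_Ar) to P_B.
    return (hashlib.sha256(key_Ar + decom).digest() == h
            and sig_vfy(pk_Ar, h, sigma_Ar))


def party_B_check_setup(pk_Ar, h, sigma_Ar) -> bool:
    # P_B only checks that the commitment h was indeed signed by P_Ar.
    return sig_vfy(pk_Ar, h, sigma_Ar)
```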
To achieve honorific security, both parties P_A and P_B have to send evidence and signatures to each other. In GC-based 2PC, if P_B wants to check whether P_A has cheated during the protocol execution, P_B can send the commitment h, the evidence ct_{A,GC}, the signature σ_{A,GC}, and the corresponding hash value H on GC to P_Ar, which can open the evidence and check all of P_A's behavior during the protocol execution. Recall that P_Ar is allowed to hold seed_A, since it does not leak any information about the private inputs of P_A and P_B. Meanwhile, the wire labels of both parties must be kept secret from P_Ar (ensured by the honest-majority setting). Note that P_B has already verified that the signed commitment h is indeed provided by P_Ar by checking P_Ar's signature in the key setup stage. This ensures that P_A must encrypt the correct seed_A with the symmetric key key_Ar provided by P_Ar (since P_Ar will use the correct key_Ar to decrypt); otherwise the arbitration will fail, except with negligible probability.
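Concretely, the garbler's evidence consists of an encryption of seed_A under key_Ar and a signature that binds it to the transcript hash; the following sketch again reuses the helpers above, and the byte-level message encoding (plain concatenation) is our assumption rather than the paper's specification.

```python
# Sketch of P_A's evidence for the GC phase: ct_{A,GC} = ENC(key_Ar, seed_A),
# H = H(C || GC || table || sid) and sigma_{A,GC} = Sign(sk_A, h || H || ct_{A,GC} || sid).
import hashlib


def garbler_evidence(sk_A, key_Ar, h: bytes, seed_A: bytes,
                     circuit: bytes, GC: bytes, table: bytes, sid: bytes):
    ct_A_GC = ske_enc(key_Ar, seed_A)
    H = hashlib.sha256(circuit + GC + table + sid).digest()
    sigma_A_GC = sig_sign(sk_A, h + H + ct_A_GC + sid)
    return ct_A_GC, H, sigma_A_GC                    # (GC, table, ct, sigma, sid) go to P_B


def evaluator_check_evidence(pk_A, h: bytes, ct_A_GC: bytes, sigma_A_GC: bytes,
                             circuit: bytes, GC: bytes, table: bytes, sid: bytes) -> bool:
    # P_B recomputes H from the received (GC, table) and verifies P_A's signature.
    H = hashlib.sha256(circuit + GC + table + sid).digest()
    return sig_vfy(pk_A, h + H + ct_A_GC + sid, sigma_A_GC)
```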
The above idea prevents P_A from cheating during the GC generation phase, since P_B can verify P_A's behavior at any time by sending this evidence to P_Ar. During the OT protocol, however, where P_B learns the receiver's input wire labels, P_A can still perform a selective-failure attack by providing a pair consisting of a true and a false label, and then observing whether P_B aborts, in order to obtain one bit of P_B's input. In our model, if P_A performs such an attack and P_B prosecutes, P_Ar should be able to detect this dishonest behavior. However, we notice that an original OT functionality (or even F^{m×OT}_{LA}) does not provide P_Ar with such an ability or other materials to perform the check.
Private inputs: P_A has input x_A ∈ {0,1}^{n_1} and a key pair {pk_A, sk_A} for the signature scheme. P_B has input x_B ∈ {0,1}^{n_2} and a key pair {pk_B, sk_B} for the signature scheme. P_Ar has a key pair {pk_Ar, sk_Ar} for the signature scheme.
Public inputs: P_A and P_B agree on a circuit C and a parameter κ. All three parties know the public keys pk_i of each other and a session ID sid.
CRS: (pk_A, pk_B, pk_Ar).
Key Setup:
1. P_Ar generates key_Ar and computes a commitment h ← Com(key_Ar), then signs it with a signature σ_Ar ← SIG.Sign(sk_Ar, h).
2. P_Ar sends (key_Ar, h, decom, σ_Ar) to P_A, which verifies whether h and σ_Ar are both valid and aborts with output ⊥ if not.
3. P_A sends (h, σ_Ar) to P_B, which verifies whether σ_Ar is valid, and aborts with output ⊥ if not.
Protocol:
1. P_A garbles the circuit C using randomness derived from seed_A. The garbled circuit is denoted as GC, P_A's input wire labels as {A_{i,b}}_{i∈[n_1], b∈{0,1}}, P_B's input wire labels as {B_{i,b}}_{i∈[n_2], b∈{0,1}}, and the output wire labels as {O_{i,b}}_{i∈[n_3], b∈{0,1}}. P_A then computes a decoding table table ← {Label^0_i, Label^1_i}_{i∈[n_3]}, where Label^0_i ← H(O_{i,0}) and Label^1_i ← H(O_{i,1}).
2. P_A and P_B call F^{m×OT}_{BA,S}, where P_A uses {B_{i,b}}_{i∈[n_2], b∈{0,1}} as input and P_B uses x_B as input.
3. P_A computes an evidence ct_{A,GC} ← Π.ENC(key_Ar, seed_A). Then P_A computes a hash value H ← H(C || GC || table || sid) and a signature σ_{A,GC} ← SIG.Sign(sk_A, h || H || ct_{A,GC} || sid). P_A then sends (GC, table, ct_{A,GC}, σ_{A,GC}, sid) to P_B.
4. P_B computes H and checks whether σ_{A,GC} is a valid signature for (h, H, ct_{A,GC}, sid), and aborts with output ⊥ if not.
5. P_A sends {A_{i,x_A[i]}}_{i∈[n_1]} to P_B.
6. P_B evaluates GC using {A_{i,x_A[i]}}_{i∈[n_1]} and {B_{i,x_B[i]}}_{i∈[n_2]}, and obtains {O_{i,o_B[i]}}_{i∈[n_3]}. Then P_B computes {H(O_{i,o_B[i]})}_{i∈[n_3]}. If any H(O_{i,o_B[i]}) ∉ {Label^0_i, Label^1_i}, P_B aborts with output ⊥; otherwise P_B outputs o_B.
* Arbitrate:
1. P_B sends an arbitrate query (C, H, ct_{A,GC}, σ_{A,GC}, sid) to P_Ar, which checks whether σ_{A,GC} is valid, and aborts with output ⊥ if not.
2. P_B sends an arbitrate query ((arbitrate, P_A), P_B, sid) to F^{m×OT}_{BA,S}. If P_Ar receives (cheated, P_A, sid), the arbitration ends here. If P_Ar receives {B_{i,b}}, the arbitration proceeds.
3. P_Ar retrieves seed_A ← Π.DEC(key_Ar, ct_{A,GC}) and computes {\hat{B}_{i,b}}, \hat{GC} and \hat{table} using the randomness derived from seed_A.
4. P_Ar computes \hat{H} ← H(C || \hat{GC} || \hat{table} || sid). If \hat{H} = H and {\hat{B}_{i,b}} = {B_{i,b}}, then P_Ar locally outputs (honest, P_A, sid). Otherwise, P_Ar outputs (cheated, P_A, sid).
Figure 4: Full description of the GC-based Π_GC that UC-realizes F^{2pc}_{LA}.
The reason is that when parties call a functionality as a sub-protocol, the input requirement is “out of scope” of this functionality description. As an example, suppose that the parties execute F^{m×OT}_{LA} as the sub-protocol within Π_GC. The OT sender P_A does not cheat by sending the cheat option to F^{m×OT}_{LA}, but performs the selective-failure attack described above. We observe that P_Ar will receive the notification of P_A's honesty from F^{m×OT}_{LA}, since P_A has not cheated internally during the OT protocol execution, although P_A does not input the correct labels to F^{m×OT}_{LA} as expected. Interestingly, this cheat can be captured by the simulator during simulation, since the simulator can decrypt seed_A and is thus aware that the labels sent from P_A to the simulated OT ideal functionality are incorrect. In reality, P_Ar is not given such an ability yet. To solve this problem, we require the improved OT functionalities shown in Fig. 3 to be used within our protocol:
- F^{m×OT}_{BA,S} directly outputs (cheated, P_A, sid) to P_Ar if P_A has already cheated internally in F^{m×OT}_{BA,S}; otherwise it outputs P_A's input {B_{i,b}} after receiving the arbitrate query from P_B. This allows P_Ar to perform the consistency check by comparing the real input {B_{i,b}} with the claimed input computed from seed_A.
- F^{m×ROT}_{BA,S} works similarly to F^{m×OT}_{BA,S}; it just outputs P_A's output {B_{i,b}} to P_Ar instead of the input. In this case, only {A_{i,b}} are generated by P_A using randomness derived from seed_A. As for P_B's input wire labels, P_A is supposed to use the messages obtained from F^{m×ROT}_{BA,S}. To check whether P_A has cheated, P_Ar only has to compare the hashes of two circuits, where one of them is computed by P_Ar itself using {A_{i,b}} generated from seed_A and {B_{i,b}} received from F^{m×ROT}_{BA,S}, and the other one is sent from P_B along with P_A's signature.
Finally, P_A sends the garbler's input labels {A_{i,x_A[i]}} to P_B, allowing P_B to compute the output o_B. P_A could send some invalid {A_{i,x_A[i]}} and cause P_B to abort, but as already noted in (Wang et al., 2017; Lindell, 2017), any such abort occurs independently of P_B's private input and thus does not help P_A learn a single bit of P_B's input.
Specifically, the above protocol ensures that no one is framed. Again, consider the case that P_A is arbitrated by P_Ar. In order to successfully frame P_A, either P_B or P_Ar has to forge an incorrect evidence (or transcript hash) that verifies under P_A's signature. If SIG is existentially unforgeable under chosen-message attacks (see Section 7 for more details), this can happen only with negligible probability.
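Putting the pieces together, the arbitration of P_A (Fig. 4, Arbitrate) boils down to decrypting the evidence, re-deriving P_A's messages from seed_A and comparing them with what P_B received; in the sketch below, regarble() is a placeholder for re-running the garbling steps from the seed, and the helpers from the earlier sketches are reused.

```python
# Sketch of P_Ar's arbitration of P_A: verify the signed evidence, recover seed_A,
# regenerate GC, table and P_B's input labels, and compare against the claimed transcript.
import hashlib


def arbitrate_garbler(pk_A, key_Ar, h: bytes, H: bytes, ct_A_GC: bytes, sigma_A_GC: bytes,
                      circuit: bytes, sid: bytes, B_labels_from_OT, regarble):
    # 1. The arbitrate query must carry P_A's signature over (h, H, ct_{A,GC}, sid).
    if not sig_vfy(pk_A, h + H + ct_A_GC + sid, sigma_A_GC):
        return ("invalid arbitrate query",)
    # 2. Recover seed_A and recompute what P_A should have sent.
    seed_A = ske_dec(key_Ar, ct_A_GC)
    GC_hat, table_hat, B_labels_hat = regarble(seed_A, circuit)
    H_hat = hashlib.sha256(circuit + GC_hat + table_hat + sid).digest()
    # 3. Consistency of the transcript hash and of the labels P_A fed into the OT.
    if H_hat == H and B_labels_hat == B_labels_from_OT:
        return ("honest", "P_A", sid)
    return ("cheated", "P_A", sid)
```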
6 OBLIVIOUS TRANSFER PROTOCOL Π^{OTE}_{BA,S}
In this section, we show how to convert an OT extension protocol into a protocol that UC-realizes F^{m×OT}_{BA,S}. We take the improved OT extension protocol of (Keller et al., 2015) (which is now a simple instantiation of SoftSpokenOT (Roy, 2022)) as an example, and we show that the modified protocol Π^{OTE}_{BA,S} implements F^{m×OT}_{BA,S} in the (F^{l×OT}_{BA,R}, F_CRS)-hybrid model. The protocol Π^{OTE}_{BA,S} is provided in the full version.
Since the original protocol (Keller et al., 2015) is secure against both a malicious P_A and a malicious P_B, we focus on how to enable P_Ar to receive P_A's input as an additional output (and nothing else) if P_B sends the arbitrate query to P_Ar (and P_A is honest). Recall that a base OT protocol is executed as a sub-protocol in (Keller et al., 2015), where P_A uses s as its input and receives {k^s_i} as output. Since both s and {k^s_i} are just randomness generated by P_A and P_B, allowing P_Ar to hold this information is harmless. Note that holding both s and {k^s_i}, along with the exact messages P_A has received from P_B, enables P_Ar to reconstruct P_A's view, including P_A's real input to the OT extension protocol. Again, we face the same problem as in Π_GC, since a traditional OT ideal functionality will not forward s and {k^s_i} to P_Ar. Thus, P_A and P_B have to call F^{l×OT}_{BA,R} shown in Fig. 3 (F^{m×OT}_{BA,R} with m = l), enabling P_Ar to receive this information. If P_B (the OT sender in F^{l×OT}_{BA,R}) sends the arbitrate query to F^{l×OT}_{BA,R}, it forwards P_A's input s and output {k^s_i} to P_Ar (if P_A is honest). In addition, we let P_A sign the message transcripts and send the signature σ_{A,OT} to P_B. This ensures that P_B cannot frame P_A by sending incorrect message transcripts to P_Ar.
F^{l×OT}_{BA,R} can easily be implemented by running any modified maliciously secure OT protocol l times, where P_A (the OT receiver) additionally sends its input encrypted under the symmetric key key_Ar and a signature (on the encrypted input and the message transcripts) to P_B (the OT sender). We note that such an implementation requires an additional communication round for the last message sent by P_A if F^{l×OT}_{BA,R} is executed separately. However, if F^{l×OT}_{BA,R} is called as a sub-protocol (as in Π^{OTE}_{BA,S}), the additional communication round is omitted.
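Analogously to the garbler's evidence, the base-OT receiver P_A in F^{l×OT}_{BA,R} can attach an encryption of its (input-independent) choice string s under key_Ar, signed together with a hash of the base-OT transcript; the sketch below shows this extra message, where the transcript hashing and byte encoding are our assumptions.

```python
# Sketch of the extra message the base-OT receiver P_A attaches in F_{BA,R}^{l x OT}:
# its choice string s encrypted under key_Ar, plus a signature over the ciphertext
# and a hash of the base-OT transcript, so P_Ar can later reconstruct P_A's view.
import hashlib


def receiver_ot_evidence(sk_A, key_Ar, s: bytes, transcript: bytes, sid: bytes):
    ct_s = ske_enc(key_Ar, s)
    t_hash = hashlib.sha256(transcript + sid).digest()
    sigma_A_OT = sig_sign(sk_A, ct_s + t_hash + sid)
    return ct_s, sigma_A_OT        # sent to P_B together with the usual OT messages


def sender_check_ot_evidence(pk_A, ct_s: bytes, sigma_A_OT: bytes,
                             transcript: bytes, sid: bytes) -> bool:
    t_hash = hashlib.sha256(transcript + sid).digest()
    return sig_vfy(pk_A, ct_s + t_hash + sid, sigma_A_OT)
```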
7 SECURITY ANALYSIS
Due to space limitations, in this section we only provide the public verifiability definition and the theorems for the proposed protocols. The simulator constructions and detailed proofs are provided in the full version.
Let vrfy() denote the arbitration algorithm that P_Ar performs. Let cert_{Ar,j} denote a certificate, which consists of the transcript that P_Ar receives from P_i to arbitrate P_j, together with (key_Ar, decom). We first define public verifiability.
Definition 7.1 (Public Verifiability). If any protocol participant P_j cheats and an honest participant P_i sends the arbitrate query, P_Ar always outputs an arbitration result (cheated, P_j, sid) with a certificate cert_{Ar,j}, except with negligible probability. If P_Ar sends cert_{Ar,j} to any party P_k, then P_k always outputs (cheated, P_j, sid) by executing vrfy(cert_{Ar,j}), except with negligible probability.
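For the garbler case, vrfy() can be read as the public counterpart of the arbitration sketch from Section 5: anyone holding cert_{Ar,A} checks the commitment opening and P_A's signature and then repeats the arbiter's recomputation. The certificate layout below is our own illustration, not the paper's exact format.

```python
# Sketch of vrfy(cert_{Ar,A}) for the garbler case: check that (key_Ar, decom) opens the
# commitment h, that the evidence carries P_A's signature, and redo the arbiter's check.
import hashlib


def vrfy_garbler_cert(pk_A, cert) -> tuple:
    (key_Ar, decom, h, H, ct_A_GC, sigma_A_GC, circuit, sid, B_labels, regarble) = cert
    if hashlib.sha256(key_Ar + decom).digest() != h:           # commitment must open to key_Ar
        return ("invalid certificate",)
    if not sig_vfy(pk_A, h + H + ct_A_GC + sid, sigma_A_GC):   # evidence signed by P_A
        return ("invalid certificate",)
    return arbitrate_garbler(pk_A, key_Ar, h, H, ct_A_GC, sigma_A_GC,
                             circuit, sid, B_labels, regarble)
```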
Theorem 7.1. Assume GCS = (Gb, En, Ev, De) is a simulatably private and correct garbling scheme, H() is a correlation-robust cryptographic hash function, SIG is existentially unforgeable under chosen-message attacks, and the SKE Π is secure under chosen-plaintext attacks. Then protocol Π_GC described in Fig. 4 UC-realizes F^{2pc}_{LA} described in Fig. 2 with public verifiability in the (F^{m×OT}_{BA,S}, F_CRS)-hybrid model in the presence of a malicious adversary who can statically corrupt either P_A, P_B or P_Ar.
Theorem 7.2. Assume H() is a correlation-robust cryptographic hash function, G is a pseudorandom generator, and SIG is existentially unforgeable under chosen-message attacks. Then protocol Π^{OTE}_{BA,S} UC-realizes F^{m×OT}_{BA,S} described in Fig. 3 in the (F^{l×OT}_{BA,R}, F_CRS)-hybrid model in the presence of a malicious adversary who can statically corrupt either P_A, P_B or P_Ar.
8 EVALUATION
8.1 Evaluation Setup
Testbed Environment. All experiments are executed on a single server with separate processes for P_A and P_B, and an additional process for P_Ar. The server runs Ubuntu Server 22.04 LTS and has two Intel Xeon CPUs (8360Y @ 2.40 GHz). All programs run with a single thread. In the LAN setting, the network bandwidth is 1 Gbps and the average latency is 0.2 ms. In the WAN setting, the network bandwidth is 100 Mbps and the average latency is 40 ms. Both are simulated with tc (Hemminger et al., 2005). We never encountered any issue with memory usage.
Baseline. To validate the efficiency of our protocols, we implement them in the open-source framework emp-toolkit (Wang et al., 2022) and compare them against the baseline implementations included in emp-toolkit: emp-sh2pc (semi-honest) at commit 61589f5, emp-pvc (PVC) at commit 7c75a85, and a modified version of emp-ag2pc (malicious) at commit eddb6bf.
Experiment Parameters. We set the security parameter κ = 128 in our implementation. We implement our protocols Π_GC and Π_OT with state-of-the-art garbling techniques (Kolesnikov and Schneider, 2008; Zahur et al., 2015). We use SHA-256, as provided by openssl, for the hash function instead of the Free Hash mentioned in PVC (Hong et al., 2019), since Guo et al. (Guo et al., 2020) recently pointed out that this instantiation of the hash function is not collision resistant. As for the signature scheme, we choose the standard ECDSA implementation provided by openssl.
Benchmark. To benchmark the running time, we run each protocol 10 times. In each run, we take the longest time over all parties as the running time of that run. The average running time over all runs is reported in Table 2 and Table 3. We also count the total communication volume of P_B, which includes both inbound and outbound traffic; in our case, it sums up P_B's communication with P_A as well as with P_Ar. The statistics are shown in Table 5.
Experiment Circuits. The circuits used for evaluation are listed in Table 1, where n_1 denotes the number of P_A's input wires, n_2 the number of P_B's input wires, n_3 the number of output wires, and |C| the number of AND gates. We let Ham. dist. denote the Hamming distance circuit.
Table 1: Circuits for evaluation. Overall n_2 OTs are required for each circuit.
Circuit    | n_1    | n_2    | n_3   | |C|
AES-128    | 128    | 128    | 128   | 6,800
SHA-256    | 512    | 256    | 256   | 22,573
SHA-512    | 1,024  | 512    | 512   | 57,947
Mult.      | 2,048  | 2,048  | 2,048 | 4,192K
Ham. dist. | 1,048K | 1,048K | 22    | 10,223K
Table 2: Comparison of the running time (in milliseconds) of all protocols in the LAN setting.
Circuit | Semi-honest | This paper | PVC   | Malicious
AES-128 | 23.73       | 33.50      | 51.32 | 87.78
SHA-256 | 28.26       | 36.07      | 67.68 | 202.95
SHA-512 | 37.77       | 44.90      | 68.18 | 429.36
Mult.   | 1,513       | 1,874      | 2,078 | 18,089
Ham.    | 1,354       | 2,349      | 2,550 | 10,682
Table 3: Comparison of the running time (in milliseconds) of all protocols in the WAN setting.
Circuit | Semi-honest | This paper | PVC    | Malicious
AES-128 | 309.17      | 397.87     | 942.93 | 1,257
SHA-256 | 347.45      | 429.35     | 1,006  | 2,097
SHA-512 | 447.26      | 525.81     | 1,098  | 3,849
Mult.   | 11,429      | 11,934     | 12,452 | 158,778
Ham.    | 10,169      | 14,804     | 20,573 | 89,464
8.2 Comparisons
Table 4: Relative speedup between our protocol and the other protocols in the LAN and WAN settings.
Circuit | PVC (LAN) | PVC (WAN) | Malicious (LAN) | Malicious (WAN)
AES-128 | 1.53×     | 2.37×     | 2.62×           | 3.16×
SHA-256 | 1.88×     | 2.34×     | 5.63×           | 4.89×
SHA-512 | 1.52×     | 2.09×     | 9.56×           | 7.32×
Mult.   | 1.11×     | 1.04×     | 9.65×           | 13.30×
Ham.    | 1.09×     | 1.39×     | 4.55×           | 6.04×

Table 5: Communication complexity in MB.
Circuit | Semi-honest | This paper | PVC    | Malicious
AES-128 | 0.21        | 0.24       | 0.75   | 0.27
SHA-256 | 0.71        | 0.75       | 1.27   | 0.93
SHA-512 | 1.80        | 1.85       | 2.40   | 2.39
Mult.   | 128.04      | 128.16     | 128.67 | 170.03
Ham.    | 112.01      | 160.04     | 176.54 | 129.00

Compared to the Semi-Honest Protocol. Table 2 and Table 3 show the running time of our protocol for each circuit compared with that of the protocol against semi-honest adversaries in the LAN and WAN settings. For the semi-honest protocol, the results include the running time of a base OT protocol and a passively secure OT extension protocol (Asharov et al., 2013). For our protocol, we follow the construction described in Section 5 with the OT extension protocol Π^{OTE}_{BA,S} described in Section 6. As shown in Table 2 and Table 3, the slowdown factor of our protocol compared to the semi-honest protocol never exceeds 2.
Compared to the PVC Protocol. Next we compare the running time of our protocol with the PVC protocol (Hong et al., 2019) with deterrence factor ε = 1/2. Table 2 and Table 3 show that achieving honorific security costs much less than achieving covert security: our protocol is up to 1.88 times faster in the LAN setting and 2.37 times faster in the WAN setting than the PVC protocol. To run a PVC protocol, the garbler and evaluator have to jointly perform the garbling scheme 2λ − 1 times and run λ·n_2 OTs (recall that ε = 1 − 1/λ), while only two garblings and n_2 OTs are needed in our protocol. We point out that if the PVC protocol is executed with a larger ε, the speedup brought by honorific security becomes even more pronounced.
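As a concrete data point for these counts (the arithmetic is ours, using ε = 1 − 1/λ):
\lambda = 2\ (\varepsilon = \tfrac{1}{2}):\ 2\lambda - 1 = 3\ \text{garblings},\ \lambda n_2 = 2 n_2\ \text{OTs}; \qquad \lambda = 4\ (\varepsilon = \tfrac{3}{4}):\ 7\ \text{garblings},\ 4 n_2\ \text{OTs},
whereas our protocol always needs 2 garblings and n_2 OTs, independent of ε.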
Compared to the Malicious Protocol. In Table 2 and Table 3 we also list the performance of a state-of-the-art malicious protocol (Katz et al., 2018).³ As shown in Table 4, our protocol beats the malicious protocol with at least a 2.85 times acceleration in the LAN setting and a 3.56 times acceleration in the WAN setting. Remark that if we omit the computation and communication overhead of an arbitration, by letting the parties only hold the received evidence as a deterrence, we achieve an even better performance.
³ The work (Cui et al., 2023) is proposed without an implementation.
Communication Overhead. The communication overhead of this paper compared with the PVC protocol (Hong et al., 2019) and the malicious protocol (Katz et al., 2018) is documented in Table 5. As expected, the communication volume of our protocol is much closer to that of the semi-honest protocol. We notice that the communication overhead of the malicious protocol is actually lower than ours for the Hamming distance circuit. This is caused by executing Π^{OTE}_{BA,S} when the number of input wires explodes. However, our protocol still dominates the malicious protocol in both the LAN and WAN settings, as shown in Table 4, since the communication rounds required for an arbitration in Π^{OTE}_{BA,S} are optimized.
9 CONCLUSION
In this paper, we propose a new security notion, honorific security, in the UC framework. By constructing an efficient OT protocol and an efficient GC-based 2PC protocol with provable security, we show that this notion provides a sufficient security guarantee while enabling high efficiency. For future work, we plan to investigate more complicated applications and protocols in which multiple parties are corrupted.
REFERENCES
Adida, B. (2008). Helios: Web-based open-audit voting. In
USENIX security symposium, volume 17, pages 335–
348.
Alexandru, A. B., Morari, M., and Pappas, G. J. (2018).
Cloud-based mpc with encrypted data. In 2018 IEEE
conference on decision and control (CDC), pages
5014–5019. IEEE.
Anati, I., Gueron, S., Johnson, S., and Scarlata, V. (2013).
Innovative technology for cpu based attestation and
sealing. In Proceedings of the 2nd international work-
shop on hardware and architectural support for secu-
rity and privacy, volume 13, page 7. Citeseer.
Archer, D. W., Bogdanov, D., Lindell, Y., Kamm, L.,
Nielsen, K., Pagter, J. I., Smart, N. P., and Wright,
R. N. (2018). From keys to databases—real-world
applications of secure multi-party computation. The
Computer Journal, 61(12):1749–1771.
Asharov, G., Lindell, Y., Schneider, T., and Zohner, M.
(2013). More efficient oblivious transfer and exten-
sions for faster secure computation. In Proceedings
of the 2013 ACM SIGSAC conference on Computer &
communications security, pages 535–548.
Asharov, G., Lindell, Y., Schneider, T., and Zohner, M.
(2015). More efficient oblivious transfer extensions
with security for malicious adversaries. In Annual
International Conference on the Theory and Appli-
cations of Cryptographic Techniques, pages 673–701.
Springer.
Asharov, G. and Orlandi, C. (2012). Calling out cheaters:
Covert security with public verifiability. In Wang, X.
and Sako, K., editors, Advances in Cryptology – ASIACRYPT 2012, pages 681–698, Berlin, Heidelberg.
Springer Berlin Heidelberg.
Aumann, Y. and Lindell, Y. (2010). Security against covert
adversaries: Efficient protocols for realistic adver-
saries. volume 23, pages 281–343. Springer.
Badrinarayanan, S., Jain, A., Ostrovsky, R., and Visconti,
I. (2019). Uc-secure multiparty computation from
one-way functions using stateless tokens. In Interna-
tional Conference on the Theory and Application of
Cryptology and Information Security, pages 577–605.
Springer.
Baum, C., Damgård, I., and Orlandi, C. (2014). Publicly
auditable secure multi-party computation. In Interna-
tional Conference on Security and Cryptography for
Networks, pages 175–196. Springer.
Baum, C., Orsini, E., and Scholl, P. (2016). Efficient se-
cure multiparty computation with identifiable abort.
In Theory of Cryptography: 14th International Con-
ference, TCC 2016-B, Beijing, China, October 31-
November 3, 2016, Proceedings, Part I 14, pages 461–
490. Springer.
Baum, C., Orsini, E., Scholl, P., and Soria-Vazquez, E.
(2020). Efficient constant-round mpc with identi-
fiable abort and public verifiability. In Advances
in Cryptology–CRYPTO 2020: 40th Annual Interna-
tional Cryptology Conference, CRYPTO 2020, Santa
Barbara, CA, USA, August 17–21, 2020, Proceedings,
Part II, pages 562–592. Springer.
Beaver, D. (1997). Commodity-based cryptography. In Pro-
ceedings of the twenty-ninth annual ACM symposium
on Theory of computing, pages 446–455.
Bellare, M., Hoang, V. T., and Rogaway, P. (2012). Founda-
tions of garbled circuits. In Proceedings of the 2012
ACM conference on Computer and communications
security, pages 784–796.
Benhamouda, F., Halevi, S., and Halevi, T. (2019). Sup-
porting private data on hyperledger fabric with secure
multiparty computation. IBM Journal of Research and
Development, 63(2/3):3–1.
Bestavros, A., Lapets, A., and Varia, M. (2017). User-
centric distributed solutions for privacy-preserving an-
alytics. Communications of the ACM, 60(2):37–39.
Byali, M., Chaudhari, H., Patra, A., and Suresh, A.
(2019). Flash: fast and robust framework for privacy-
preserving machine learning. Cryptology ePrint
Archive.
Canetti, R. (2001). Universally composable security: A new
paradigm for cryptographic protocols. In Proceedings
42nd IEEE Symposium on Foundations of Computer
Science, pages 136–145. IEEE.
Cartlidge, J., Smart, N. P., and Talibi Alaoui, Y. (2019).
Mpc joins the dark side. In Proceedings of the 2019
ACM Asia Conference on Computer and Communica-
tions Security, pages 148–159.
Cordi, C., Frank, M. P., Gabert, K., Helinski, C., Kao, R. C.,
Kolesnikov, V., Ladha, A., and Pattengale, N. (2022).
Auditable, available and resilient private computation
on the blockchain via mpc. In International Sym-
posium on Cyber Security, Cryptology, and Machine
Learning, pages 281–299. Springer.
Cui, H., Wang, X., Yang, K., and Yu, Y. (2023). Ac-
tively secure half-gates with minimum overhead un-
der duplex networks. In Advances in Cryptology–
EUROCRYPT 2023: 42nd Annual International Con-
ference on the Theory and Applications of Crypto-
graphic Techniques, Lyon, France, April 23–27, 2023,
Proceedings, Part II, pages 35–67. Springer.
Damgård, I., Geisler, M., and Nielsen, J. B. (2010). From
passive to covert security at low cost. In Theory of
Cryptography Conference, pages 128–145. Springer.
Damgård, I., Pastro, V., Smart, N., and Zakarias, S. (2012).
Multiparty computation from somewhat homomor-
phic encryption. In Annual Cryptology Conference,
pages 643–662. Springer.
Dittmer, S., Ishai, Y., Lu, S., and Ostrovsky, R. (2022). Au-
thenticated garbling from simple correlations. In Ad-
vances in Cryptology–CRYPTO 2022: 42nd Annual
International Cryptology Conference, CRYPTO 2022,
Santa Barbara, CA, USA, August 15–18, 2022, Pro-
ceedings, Part IV, pages 57–87. Springer.
Evans, D., Kolesnikov, V., and Rosulek, M. (2017). A
pragmatic introduction to secure multi-party compu-
tation. Foundations and Trends® in Privacy and Se-
curity, 2(2-3).
Evans, D., Kolesnikov, V., Rosulek, M., et al. (2018). A
pragmatic introduction to secure multi-party compu-
tation. Foundations and Trends® in Privacy and Se-
curity, 2(2-3):70–246.
Gao, H., Ma, Z., Luo, S., and Wang, Z. (2019). Bfr-mpc: a
blockchain-based fair and robust multi-party compu-
tation scheme. IEEE access, 7:110439–110450.
Goldreich, O. (2009). Foundations of cryptography: vol-
ume 2, basic applications. Cambridge university
press.
Goyal, V., Ishai, Y., Sahai, A., Venkatesan, R., and Wadia,
A. (2010). Founding cryptography on tamper-proof
hardware tokens. In Theory of Cryptography Confer-
ence, pages 308–326. Springer.
Goyal, V., Mohassel, P., and Smith, A. (2008). Efficient
two party and multi party computation against covert
adversaries. In Annual International Conference on
the Theory and Applications of Cryptographic Tech-
niques, pages 289–306. Springer.
Graf, M., Küsters, R., and Rausch, D. (2023). Auc: Ac-
countable universal composability. In 2023 IEEE
Symposium on Security and Privacy (SP), pages
1148–1167. IEEE.
Guo, C., Katz, J., Wang, X., and Yu, Y. (2020). Efficient and
secure multiparty computation from fixed-key block
ciphers. In 2020 IEEE Symposium on Security and
Privacy (SP), pages 825–841. IEEE.
Hastings, M., Hemenway, B., Noble, D., and Zdancewic,
S. (2019). Sok: General purpose compilers for secure
multi-party computation. In 2019 IEEE symposium on
security and privacy (SP), pages 1220–1237. IEEE.
Hazay, C., Scholl, P., and Soria-Vazquez, E. (2020). Low
cost constant round mpc combining bmr and oblivious
transfer. Journal of Cryptology, 33(4):1732–1786.
Hemminger, S. et al. (2005). Network emulation with
netem. In Linux conf au, volume 5, page 2005. Cite-
seer.
Hong, C., Katz, J., Kolesnikov, V., Lu, W.-j., and Wang,
X. (2019). Covert security with public verifiability:
Faster, leaner, and simpler. In Annual International
Conference on the Theory and Applications of Cryp-
tographic Techniques, pages 97–121. Springer.
Ishai, Y., Ostrovsky, R., and Zikas, V. (2014). Secure multi-
party computation with identifiable abort. In Annual
Cryptology Conference, pages 369–386. Springer.
Katz, J., Ranellucci, S., Rosulek, M., and Wang, X. (2018).
Optimizing authenticated garbling for faster secure
two-party computation. In Annual International Cryp-
tology Conference, pages 365–391. Springer.
Keller, M., Orsini, E., and Scholl, P. (2015). Actively se-
cure ot extension with optimal overhead. In Annual
Cryptology Conference, pages 724–741. Springer.
Keller, M., Pastro, V., and Rotaru, D. (2018). Overdrive:
Making spdz great again. In Annual International
Conference on the Theory and Applications of Cryp-
tographic Techniques, pages 158–189. Springer.
Kolesnikov, V. and Malozemoff, A. J. (2015). Public verifi-
ability in the covert model (almost) for free. In Iwata,
T. and Cheon, J. H., editors, Advances in Cryptology – ASIACRYPT 2015, pages 210–235, Berlin, Heidelberg. Springer Berlin Heidelberg.
Kolesnikov, V. and Schneider, T. (2008). Improved garbled
circuit: Free xor gates and applications. In Interna-
tional Colloquium on Automata, Languages, and Pro-
gramming, pages 486–498. Springer.
Koti, N., Kukkala, V. B., Patra, A., and Raj Gopal, B.
(2022). Pentagod: Stepping beyond traditional god
with five parties. In Proceedings of the 2022 ACM
SIGSAC Conference on Computer and Communica-
tions Security, pages 1843–1856.
Küsters, R., Liedtke, J., Müller, J., Rausch, D., and Vogt,
A. (2020). Ordinos: a verifiable tally-hiding e-voting
system. In 2020 IEEE European Symposium on Secu-
rity and Privacy (EuroS&P), pages 216–235. IEEE.
Küsters, R., Truderung, T., and Vogt, A. (2010). Account-
ability: definition and relationship to verifiability. In
Proceedings of the 17th ACM conference on Com-
puter and communications security, pages 526–535.
Li, P., Li, J., Huang, Z., Li, T., Gao, C.-Z., Yiu, S.-M., and
Chen, K. (2017). Multi-key privacy-preserving deep
learning in cloud computing. Future Generation Com-
puter Systems, 74:76–85.
Lindell, Y. (2016). Fast cut-and-choose-based protocols for
malicious and covert adversaries. Journal of Cryptol-
ogy, 29(2):456–490.
Lindell, Y. (2017). How to simulate it–a tutorial on the sim-
ulation proof technique. Tutorials on the Foundations
of Cryptography, pages 277–346.
Lindell, Y. and Pinkas, B. (2015). An efficient protocol for
secure two-party computation in the presence of mali-
cious adversaries. Journal of Cryptology, 28(2):312–
350.
Liu, J., He, X., Sun, R., Du, X., and Guizani, M. (2021).
Privacy-preserving data sharing scheme with via
mpc in financial permissioned blockchain. In ICC
2021-IEEE International Conference on Communica-
tions, pages 1–6. IEEE.
Liu, X., Deng, R. H., Yang, Y., Tran, H. N., and Zhong, S.
(2018). Hybrid privacy-preserving clinical decision
support system in fog–cloud computing. Future Gen-
eration Computer Systems, 78:825–837.
Lu, Y., Zhang, B., Zhou, H.-S., Liu, W., Zhang, L.,
and Ren, K. (2021). Correlated randomness tele-
portation via semi-trusted hardware—enabling silent
multi-party computation. In European Symposium
on Research in Computer Security, pages 699–720.
Springer.
Nie, L., Yao, S., and Liu, J. (2023). Secure multiparty com-
putation with identifiable abort and fairness. In 2023
7th International Conference on Cryptography, Secu-
rity and Privacy (CSP), pages 99–106. IEEE.
Nordholt, P. S. and Toft, T. (2017). Confidential bench-
marking based on multiparty computation. In Finan-
cial Cryptography and Data Security: 20th Interna-
tional Conference, FC 2016, Christ Church, Barba-
dos, February 22–26, 2016, Revised Selected Papers,
volume 9603, page 169. Springer.
Rivinius, M., Reisert, P., Rausch, D., and Küsters, R.
(2022). Publicly accountable robust multi-party com-
putation. In 2022 IEEE Symposium on Security and
Privacy (SP), pages 2430–2449. IEEE.
Roy, L. (2022). Softspokenot: Quieter ot extension
from small-field silent vole in the minicrypt model.
Springer-Verlag.
Scholl, P., Simkin, M., and Siniscalchi, L. (2021). Multi-
party computation with covert security and public ver-
ifiability. Cryptology ePrint Archive.
So, J., Güler, B., and Avestimehr, A. S. (2021). Coded-
privateml: A fast and privacy-preserving framework
for distributed machine learning. IEEE Journal on Se-
lected Areas in Information Theory, 2(1):441–451.
Wang, X., Malozemoff, A. J., and Katz, J. (2022). Emp-
toolkit: Efficient multiparty computation toolkit.
Wang, X., Ranellucci, S., and Katz, J. (2017). Authenti-
cated garbling and efficient maliciously secure two-
party computation. In Proceedings of the 2017 ACM
SIGSAC conference on computer and communications
security, pages 21–37.
Yang, K., Wang, X., and Zhang, J. (2020). More effi-
cient mpc from improved triple generation and au-
thenticated garbling. In Proceedings of the 2020 ACM
SIGSAC Conference on Computer and Communica-
tions Security, pages 1627–1646.
Yao, A. C.-C. (1986). How to generate and exchange se-
crets. In 27th annual symposium on foundations of
computer science (Sfcs 1986), pages 162–167. IEEE.
Zahur, S., Rosulek, M., and Evans, D. (2015). Two halves
make a whole. In Annual International Conference on
the Theory and Applications of Cryptographic Tech-
niques, pages 220–250. Springer.
Zhou, J., Feng, Y., Wang, Z., and Guo, D. (2021). Using
secure multi-party computation to protect privacy on
a permissioned blockchain. Sensors, 21(4):1540.