Key Management as a Service
Liran Lerman (1,2), Olivier Markowitch (1) and Jorge Nakahara Jr (1)
(1) Quality and Security of Information Systems, Département d'informatique, Université Libre de Bruxelles, Bruxelles, Belgium
(2) Machine Learning Group, Département d'informatique, Université Libre de Bruxelles, Bruxelles, Belgium
Keywords:
Cloud Computing, Key Management, Threshold Cryptosystem, Protocol.
Abstract:
In this paper we consider the security issues related to key management in cloud computing. We focus on the difficulty of managing the cryptographic keys necessary to maintain, for example, the confidentiality of information stored in the clouds. In this framework, we present a threshold cryptosystem as well as three protocols, based on cooperation between cloud providers and on a random number generator acting as a trusted third party, that cover the issue of key management.
1 INTRODUCTION
The advent of cloud computing (offered, e.g., by Google (Google App Engine), Microsoft (Microsoft Azure), IBM (IBM Smart Business Service), Amazon (Amazon EC2), VMware (VMware vCloud), EMC (EMC Atmos), Salesforce, etc.) can be seen as a natural step in the evolution of on-demand information technology services and products. Indeed, cloud computing borrows concepts from grid computing, virtualization, etc.
International Data Corporation (IDC) estimated that cloud computing services represented 5% of global IT investment in 2009 and could reach 10% in 2013 (European Union authorities estimate a similar growth) (Syntec informatique, 2010). It seems that we are not facing a fad but rather a new stage in the evolution of the Internet.
There are many benefits in moving to the clouds. For example, many small and medium enterprises (SME) consider adopting cloud computing solutions in order to reduce their investment in hardware, software and support, as well as to optimize the management of the resources they need; moreover, they feel more secure in case of disaster (Hogben, 2009; Sogeti, 2009).
On the other hand, the concerns of companies are mainly focused on data security issues (Mather et al., 2009), and more precisely on issues related to confidentiality (which appears among the top ten obstacles to, and opportunities for, the growth of cloud computing (Armbrust et al., 2009)), availability, integrity, repudiation and loss of data/services (Hogben, 2009; Catteddu and Hogben, 2009; Gellman, 2009; Mather et al., 2009); security is considered a concern by 74.6% of North American companies interested in cloud computing (Sogeti, 2009). These issues stem from the fact that users' data and applications reside on an infrastructure owned by a third party, resulting in a loss of control over their data. This shortcoming can be mitigated by using cryptographic techniques.
Following (Geater, 2011), “more people are using more cryptographic keys than ever before, and cryptography is meaningless without strong key management”. It is indeed obvious that without strong key management, applied cryptography is useless: cryptographic keys have to be created, stored and removed properly.
In this paper we focus on long-term cryptographic keys (e.g. keys that are used to ensure the security of data stored on cloud infrastructures), rather than on short-term keys such as those used in Transport Layer Security. Furthermore, for the sake of simplicity, we focus here on key management in the context of data encryption.
There are many possible key management proto-
cols in the framework of cloud computing.
First of all, we can transpose all the key management as is into the cloud infrastructure: the generation, storage and replacement of keys would all be realized by the cloud provider itself. In this case, the cloud provider has access both to the sensitive encrypted data and to the keys needed to decrypt it. Trust is therefore the biggest issue, as quoted in (Urquhart, 2009). Even if access control and Service Level Agreements (SLA) (Kandukuri et al., 2009) can stipulate that no one can decrypt data without authorization, many risks remain, such as loss of isolation between users (named tenants), since not all virtualization software is bug-free (Ristenpart et al., 2009; Catteddu and Hogben, 2009), malicious employees within the cloud provider (Rocha and Correia, 2011; Catteddu and Hogben, 2009), inefficient or unreliable removal of data (Catteddu and Hogben, 2009), information leakage from indexing (Squicciarini et al., 2010), etc. Note that removal of data deserves special care: since the cloud hardware is also accessible to other tenants, the provider must ensure that deleted information is not made available to other clients (e.g. at the time of a memory allocation); copies of the same data may also be kept by the provider for availability and redundancy, and since removing data from every place where it is stored is not necessarily trivial, deleting information often results in dereferencing the corresponding data rather than physically removing it; logfiles could also allow deleted information to be recovered.
On the other hand, a cloud consumer can store its
encrypted data on a cloud infrastructure and its cryp-
tographic keys in another cloud infrastructure (optionally in many others). In this case, an opponent has to
access several cloud providers in order to obtain the
decrypted information. The drawback of this solution
is that it forces the consumer to manage several con-
tracts, one for each cloud provider.
Another solution is to keep all key management
in the customer’s data center. In this case, keys are
stored on-premise and encrypted data are stored off-
premise. It is probably the safest solution if we have
a good local key management, but the resulting archi-
tecture is not a complete cloud service as defined by
ENISA (Catteddu and Hogben, 2009) (e.g. shared re-
sources) and it definitely contradicts the key concept
of cloud computing: access to a service from everywhere.
Yet another solution is to keep a part of the
key management on-premise and another part off-
premise: for each encrypted data, the decryption key
could be divided, for example, in two subkeys, one
stored on the cloud provider and another on-premise.
Another example is when each key stored in the cloud provider is encrypted by a (single) key stored on-premise. In this case, no one in the cloud can find the secret key without having access to the on-premise key management. However, these two solutions are probably the worst in terms of storage, since customers have to store keys both on-premise and off-premise.
In brief, we can see that we are facing a dilemma
specific to the cloud: we want to store keys neither on-premise nor in the cloud, yet we want to secure our strategic data, which may reside on a server also
accessed by competitors. As quoted in (Mather et al.,
2009), proper key management is a complex and dif-
ficult task in the cloud. The aim of this paper is to
propose solutions for this problem of key manage-
ment that are based both on the inter-cloud approach
and on a threshold cryptosystem. The proposed solu-
tions are not specific to SaaS (Software as a service),
PaaS (Platform as a service) or IaaS (Infrastructure
as a service). They can be implemented in each of
these types of cloud service but in the case of SaaS
the cloud provider has to implement them, while in
the cases of PaaS and IaaS the customer has to implement them.
The paper is organized as follows. In section 2, we briefly introduce our solution and its key concepts.
In section 3 we detail our solution and we conclude in
section 4.
2 OVERVIEW OF THE SOLUTION
In this section, we present an overview of our solu-
tion. Our aim is to increase users' confidence in cloud providers, while ensuring that
neither a cloud provider nor an opponent can have ac-
cess to (decrypted) users’ information. For this, our
solution is based on three concepts which will be de-
tailed in the next sections: interCloud, threshold cryp-
tosystem and Fast Random Number Generator.
2.1 interCloud
The interCloud is an interconnection of cloud
providers, just as the Internet became a network of net-
works. It was first presented in 2007 by Kevin Kelly
(Kelly, K., 2007) and detailed in (Bernstein et al.,
2009; Celesti et al., 2010; Buyya et al., 2010).
A metaphor for the interCloud comes from the insurance field. Each insurer can cover rare events affecting a small part of a region. However, no insurer can cover alone the risk of, for example, a tsunami: in this special case the event is rare, but it affects a large part of their customers. In order to face such events, insurers join in a group allowing them to share the risks.
The situation is similar when considering cloud
infrastructures. Cloud providers could be grouped
(e.g. in order to share the risks of saturation of re-
sources). We base our solution on this concept and
KeyManagementasaService
277
call it Key Management as a Service (KMaaS). We are aware that many challenges have to be overcome before the advent of the interCloud, but research such as (Bernstein et al., 2009) shows progress in this field.
2.2 Threshold Cryptosystems
Our solution is also based on a new (t, n + 1)-
threshold cryptosystem.
Following the classical idea, we can fit a unique polynomial $f(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_{t-1} x^{t-1}$ of degree $t-1$ with $t$ points, where the first coefficient $a_0$ is the secret information shared between the $n$ cloud providers, denoted $CC_1, CC_2, \ldots, CC_n$ (Shamir, 1979). Note that $a_0$ and $a_{t-1}$ must be nonzero. In order to share it, the random number generator (RNG) provides the secret value $f(i)$ and the public value $i$ to the $i$-th cloud provider, and the values $f(n+1)$ and $n+1$ to the user.
Assume that a customer $U$ has to send a secret piece of information $m \in \mathbb{N} \setminus \{0\}$ to his cloud provider $CC_1$. Let $A \to_p B : x$ denote Alice sending a piece of information $x$ to Bob through a private channel (e.g. secured by Transport Layer Security), and $A \to B : x$ through a public channel. Let also $CC_i$ denote the $i$-th cloud provider. Our symmetric encryption threshold cryptosystem works as follows:
Let $g(i) = \prod_{j \in \{1, \ldots, n+1\} \setminus \{i\}} \frac{j}{j-i}$.
The RNG generates $t$ random values $\{a_0, a_1, \ldots, a_{t-1}\}$ and constructs the polynomial $f(x)$.
$RNG \to_p CC_i : \{i; f(i) = K_{CC_i}\}, \forall i \in \{1, 2, \ldots, n\}$
$RNG \to_p U : \{n+1; f(n+1) = K_U\}$
$U \to_p CC_1 : m + f(n+1)g(n+1) = E_{K_U}(m) = m_1$
$CC_1 \to_p CC_2 : m_1 + f(1)g(1) = E_{K_{CC_1}}(m_1) = m_2$
$\ldots$
$CC_n \to_p CC_1 : m_n + f(n)g(n) = E_{K_{CC_n}}(m_n) = m + a_0$
$E_K(m)$ is the one-time pad encryption of a message $m$ using a secret key $K$. In order to be consistent with the threshold cryptosystem paradigm, we use $n$ cloud providers to encrypt a piece of data, even though at least $t$ cloud providers would be sufficient.
Finally, $CC_1$ has $m + a_0$ and a part of the shared key, $K_{CC_1} = f(1)$, while the $i$-th cloud provider has $K_{CC_i} = f(i)$. The decryption protocol applies the threshold cryptosystem backwards: on the basis of the values $f(i)$ secretly kept by $t$ (where $1 \le t \le n$) cloud providers, the cryptosystem reconstructs the polynomial needed to obtain $a_0$ and subtracts it from $m + a_0$. More precisely, the $i$-th cloud provider computes $E_{K_{CC_i}}(m_i) = m_i + f(i)g(i)$ during the encryption process and $D_{K_{CC_i}}(m_i) = m_i - f(i)g(i)$ during the decryption process. Note that $g(i)$ depends on the number of cloud providers involved in the encryption/decryption process. Therefore, at the time of the encryption, adding $n$ terms is equivalent to adding only $t$ terms (the additional terms do not change the polynomial). In other words, $t$ distinct points (shares) are enough to interpolate a polynomial of degree $t-1$; the additional $n-t$ points fall on the already known polynomial curve and so do not lead to a different polynomial.
Note that the RNG should neither keep nor leak the shares or the polynomial coefficients; in other words, it does not need to store anything for the protocols to run. Moreover, $f(n+1)$ and $f(0)$ must be at least as long as $m$.
Each time a customer has to send confidential information to its cloud provider, this information has to be encrypted by $n$ cloud providers. Therefore we focus on a solution based on a symmetric algorithm for performance reasons.
Furthermore, cloud computing involves a mass consumption of cryptography. In order to generate the $n+1$ keys each time the customer has to encrypt data, we must have a fast RNG. Indeed, the new symmetric threshold cryptosystem $E_K$ is a variant of the one-time pad which involves one random key shared between $n$ entities.
The main advantage of this cryptosystem is that it is a commutative symmetric $(t, n+1)$-threshold cryptosystem (i.e. $E_{K_{CC_i}}(E_{K_{CC_j}}(m)) = E_{K_{CC_j}}(E_{K_{CC_i}}(m))$), which is essential for the following protocols. Furthermore, it allows data to be encrypted/decrypted as long as there are at least $t$ available honest cloud providers. On the other hand, the confidentiality of a piece of data is assured as long as there are fewer than $t$ dishonest cloud providers. Therefore, the security of the scheme is based on the one-time pad, which is unconditionally secure; furthermore, the private communication channels prevent any leak of keys or messages without the complicity of the cloud providers.
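As an illustration, the following minimal sketch in Python implements the scheme described above: the RNG picks the polynomial and distributes the shares $f(i)$, every participant adds its layer $f(i)g(i)$, and any $t$ share holders recover $a_0$ and subtract it. Working over the rationals with Python's Fraction, using the secrets module as RNG, and the helper names are all assumptions made for the example; the paper does not fix an underlying field or an implementation.

# Minimal sketch of the additive (t, n+1)-threshold one-time pad of section 2.2.
# Assumptions: arithmetic over the rationals (Fraction) and Python's secrets
# module as RNG; the paper fixes neither a field, a key length nor an implementation.
from fractions import Fraction
import secrets

def make_shares(t, n, max_coeff=2**128):
    # RNG side: f(x) = a0 + a1*x + ... + a_{t-1}*x^{t-1}, all coefficients nonzero.
    coeffs = [secrets.randbelow(max_coeff) + 1 for _ in range(t)]
    f = lambda x: sum(a * x**k for k, a in enumerate(coeffs))
    shares = {i: Fraction(f(i)) for i in range(1, n + 2)}   # f(1..n) for CC_i, f(n+1) for U
    return coeffs[0], shares                                # a0 returned only for the test below

def g(i, points):
    # Lagrange weight g(i) = prod_{j in points, j != i} j / (j - i), so that
    # sum_i f(i) * g(i) = f(0) = a0 over any suitable set of points.
    w = Fraction(1)
    for j in points:
        if j != i:
            w *= Fraction(j, j - i)
    return w

def encrypt(m, shares):
    # Each of the n+1 participants adds its layer f(i)*g(i); the result is m + a0.
    points = list(shares)
    c = Fraction(m)
    for i in points:
        c += shares[i] * g(i, points)
    return c

def decrypt(c, subset_shares):
    # Any t participants recompute a0 from their shares and subtract it.
    points = list(subset_shares)
    a0 = sum(subset_shares[i] * g(i, points) for i in points)
    return c - a0

t, n, m = 3, 5, 123456789
a0, shares = make_shares(t, n)
c = encrypt(m, shares)                                      # n+1 commutative layers
subset = {i: shares[i] for i in (2, 4, n + 1)}              # any t of the share holders
assert c == m + a0 and decrypt(c, subset) == m

The final assertion checks the threshold property that the protocols below rely on: any $t$ of the $n+1$ share holders can remove the mask $a_0$.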
3 PROTOCOLS
In this section we propose three protocols intended to
address the security issues previously described.
In order to keep our protocols clear and readable,
we assume that the entities involved in the protocols
are authenticated and that the message integrity is
guaranteed during the transactions.
The ideas behind our protocols are manifold:
SECRYPT2012-InternationalConferenceonSecurityandCryptography
278
1. we assume that each cloud provider will offer
some services to the other cloud providers
2. the cloud providers are trusted to behave according to the protocol (i.e. they provide their share correctly) but they are not trusted to stay out of a coalition. Therefore we consider that no more than t cloud providers would try to collude
3. we will use the (t, n + 1)-threshold cryptosystem,
presented in section 2.2, in order to spread en-
crypted data among several cloud providers (say
$n$ cloud providers) in such a way that $t \le n+1$ cloud providers are enough to decrypt the data
4. each encrypted information is signed in order to
ensure that all cloud providers have participated
in the encryption process and therefore to ensure
it is not $CC_1$ who performed the whole protocol alone
5. the RNG is a trusted third party; its role is similar to that of a dealer in secret sharing schemes.
3.1 Protocol 1
In the first protocol we assume that the cloud provider
$CC_1$ keeps most of the keys, while the user $U$ only has to store and secure one asymmetric private key. Furthermore, the encrypted information as well as the digital signatures related to this information are stored on $CC_1$.
To encrypt a piece of information $m$, a $(t, n)$-threshold
cryptosystem similar to the one seen previously is
used. The main difference is that the user is not in-
volved in the threshold cryptosystem.
Let $u_m$ be an identification token of the message $m$ (e.g. $u_m = h(m) \| U \| CC \| d$, where $h$ is a collision-resistant hash function, $\|$ denotes the concatenation of bitstrings and $d$ represents a time stamp).
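As a concrete example, here is a short sketch of how such a token could be derived; the choice of SHA-256, the string identifiers for $U$ and $CC$ and the Unix-time encoding of $d$ are assumptions, since the paper only requires a collision-resistant hash, identifiers and a time stamp.

# Illustrative computation of the identification token u_m = h(m) || U || CC || d.
# SHA-256 and the identifier/timestamp encodings are assumptions, not fixed by the paper.
import hashlib
import time

def identification_token(m: bytes, user_id: str, provider_id: str) -> bytes:
    h_m = hashlib.sha256(m).digest()            # h(m): collision-resistant hash of the message
    d = str(int(time.time())).encode()          # d: a time stamp
    return h_m + user_id.encode() + provider_id.encode() + d

u_m = identification_token(b"secret document", "U", "CC1")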
The protocol is as follows:
$RNG \to_p CC_i : \{i; K_{CC_i}\}, \forall i \in [1, n]$
$U \to_p CC_1 : E_{k_U}(m);\ u_m$
$CC_1 \to_p CC_2 : E_{K_{CC_1}}(E_{k_U}(m));\ sig_{CC_1}(u_m);\ E_{k_U}(1 \| K_{CC_1})$
$CC_2 \to_p CC_3 : E_{K_{CC_2}}(E_{K_{CC_1}}(E_{k_U}(m)));\ sig_{CC_2}(sig_{CC_1}(u_m));\ E_{k_U}(1 \| K_{CC_1});\ E_{k_U}(2 \| K_{CC_2})$
$\ldots$
$CC_n \to_p CC_1 : E_{K_{CC_n}}(\ldots E_{K_{CC_1}}(E_{k_U}(m)) \ldots);\ sig_{CC_n}(\ldots sig_{CC_1}(u_m) \ldots);\ E_{k_U}(1 \| K_{CC_1});\ \ldots;\ E_{k_U}(n \| K_{CC_n})$
In this protocol $E_{k_U}(m)$ represents an asymmetric encryption of a message $m$ under the user's public key $k_U$. Note that $sig_A(u_m)$ is the digital signature realized by Alice (using her signature private key) on $u_m$. This allows the user to verify that every cloud provider has participated in the encryption process. Furthermore, notice that $CC_1$ signs $u_m$ and not the secret message $m$, in order to give an attacker minimum information about $m$.
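To make the chained signatures concrete, here is a small sketch; the use of Ed25519 through the third-party cryptography package is an assumption (the paper only assumes some digital signature scheme), and the chain is built as $sig_{CC_n}(\ldots sig_{CC_1}(u_m) \ldots)$, each provider signing the signature produced by the previous one.

# Sketch of the signature chain sig_CC_n(... sig_CC_1(u_m) ...) of Protocol 1.
# Ed25519 via the third-party 'cryptography' package is an assumption; the paper
# only requires some digital signature scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

providers = [Ed25519PrivateKey.generate() for _ in range(4)]     # CC_1, ..., CC_n

def sign_chain(u_m: bytes):
    # Each provider signs the signature produced by the previous provider.
    sigs, data = [], u_m
    for sk in providers:
        data = sk.sign(data)
        sigs.append(data)
    return sigs

def verify_chain(u_m: bytes, sigs) -> bool:
    # The user replays the chain to check that every CC_i participated.
    data = u_m
    for sk, sig in zip(providers, sigs):
        try:
            sk.public_key().verify(sig, data)
        except InvalidSignature:
            return False
        data = sig
    return True

u_m = b"h(m) || U || CC || d"        # identification token (placeholder bytes)
assert verify_chain(u_m, sign_chain(u_m))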
When the user wants to access his information, the
decryption works as follows:
$U \to_p CC_1 : u_m$
$CC_1 \to_p U : E_{K_{CC_{|t|}}}(\ldots E_{K_{CC_{|1|}}}(E_{k_U}(m)) \ldots);\ E_{k_U}(|t| \| K_{CC_{|t|}});\ \ldots;\ E_{k_U}(|1| \| K_{CC_{|1|}})$
Thanks to the encryption scheme, the user has to remove only $t$ encryption layers among $n$. Note that $CC_{|j|}$ denotes one among the $n$ cloud providers; in other words, $CC_{|j|}$ is not specifically the $j$-th cloud provider (i.e. $CC_{|j|} = CC_{i_j}$ where $i_j \in \{2, \ldots, n\}$).
The user must store only one key, whereas in order to retrieve a secret message, an opponent has to attack both $U$ and $CC_1$. This is similar to the situation where we have one cloud provider and one user: the message $m$ is stored in encrypted form at $CC_1$, protected by the key $k_U$. Nevertheless, we present this protocol as a first step in the design of our key management solution.
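In this protocol, each provider also hands its key to the user in wrapped form as $E_{k_U}(i \| K_{CC_i})$, so that only the holder of the asymmetric private key can recover the providers' keys. Below is a minimal sketch of that wrapping; RSA-OAEP through the cryptography package is an assumption, since the paper only requires some asymmetric encryption under $k_U$.

# Sketch of the key wrapping E_{k_U}(i || K_CC_i) used in Protocol 1.
# RSA-OAEP via the 'cryptography' package is an assumption; the paper only
# requires an asymmetric encryption under the user's public key k_U.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

user_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held by U only
user_pk = user_sk.public_key()                                            # k_U, known to the providers

def wrap(index: int, provider_key: bytes) -> bytes:
    # CC_i side: encrypt "i || K_CC_i" under the user's public key k_U.
    return user_pk.encrypt(index.to_bytes(4, "big") + provider_key, oaep)

def unwrap(blob: bytes):
    # User side: recover (i, K_CC_i) with the private key.
    data = user_sk.decrypt(blob, oaep)
    return int.from_bytes(data[:4], "big"), data[4:]

i, K_CC_i = 1, b"\x00" * 16                    # toy provider key share
assert unwrap(wrap(i, K_CC_i)) == (i, K_CC_i)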
3.2 Protocol 2
In this second protocol, the cloud provider $CC_1$ has to store a signature, a portion of the key and the encrypted information, while the other providers, as well as the user $U$, have to store a portion of the key.
The encryption protocol works as follows:
$RNG \to_p CC_i : \{i; K_{CC_i}\}, \forall i \in [1, n]$
$RNG \to_p U : \{n+1; K_U\}$
$U \to_p CC_1 : E_{K_U}(m);\ u_m$
$CC_1 \to_p CC_2 : E_{K_{CC_1}}(E_{K_U}(m));\ sig_{CC_1}(u_m);\ u_m$
$CC_2 \to_p CC_3 : E_{K_{CC_2}}(E_{K_{CC_1}}(E_{K_U}(m)));\ sig_{CC_2}(sig_{CC_1}(u_m));\ u_m$
$\ldots$
$CC_n \to_p CC_1 : E_{K_{CC_n}}(E_{K_{CC_{n-1}}}(\ldots E_{K_{CC_1}}(E_{K_U}(m)) \ldots));\ sig_{CC_n}(sig_{CC_{n-1}}(\ldots sig_{CC_1}(u_m) \ldots));\ u_m$
When the user wants to access his information, the
decryption works as follows:
$U \to CC_1 : u_m$
$CC_1 \to_p CC_{|t|} : E_{K_{CC_{|t|}}}(E_{K_{CC_{|t-1|}}}(\ldots E_{K_{CC_{|2|}}}(E_{K_U}(m)) \ldots));\ u_m$
$CC_{|t|} \to_p CC_{|t-1|} : E_{K_{CC_{|t-1|}}}(\ldots E_{K_{CC_{|2|}}}(E_{K_U}(m)) \ldots);\ u_m$
$\ldots$
$CC_{|2|} \to_p U : E_{K_U}(m);\ u_m$
The commutative encryption scheme used in this protocol allows each chosen cloud provider $i$ among the $n$ cloud providers to remove his encryption layer. Compared to the previous protocol, in order to recover a piece of information $m$, an attacker must attack $U$ and $CC_{|i|}$, $\forall i \in [1, t]$.
A drawback is that if more than $n-t$ cloud providers disappear after the encryption phase, the encrypted information cannot be deciphered anymore. Therefore, the values of $t$ and $n$ shall be carefully chosen to strike a balance between the minimum number $t$ of honest shareholders (to avoid coalitions of dishonest players) and the total number $n$ of shareholders (to avoid disruption of the protocol).
The next protocol, which is the added value of our paper, shows how we can achieve the same security as this protocol without requiring the user to keep a key locally.
3.3 Protocol 3
In this last protocol, the aim is to have only the cloud providers store and secure cryptographic keys. The cloud provider $CC_1$ has to store the encrypted information, a portion of the key and a signature, while the other providers have to store a portion of the key. The user $U$ does not store any key.
The protocol works as follows:
$RNG \to_p CC_i : \{i; K_{CC_i}\}, \forall i \in [1, n]$
$RNG \to_p U : \{n+1; K_U\}$
$U \to_p CC_1 : E_{K_U}(m);\ u_m$
$CC_1 \to_p CC_2 : E_{K_{CC_1}}(E_{K_U}(m));\ sig_{CC_1}(u_m);\ u_m$
$CC_2 \to_p CC_3 : E_{K_{CC_2}}(E_{K_{CC_1}}(E_{K_U}(m)));\ sig_{CC_2}(sig_{CC_1}(u_m));\ u_m$
$\ldots$
$CC_n \to_p CC_1 : E_{K_{CC_n}}(E_{K_{CC_{n-1}}}(\ldots E_{K_{CC_1}}(E_{K_U}(m)) \ldots));\ sig_{CC_n}(sig_{CC_{n-1}}(\ldots sig_{CC_1}(u_m) \ldots));\ u_m$
When the user wants to access his information, the decryp-
tion works as follows:
$U \to CC_1 : u_m$
$CC_1 \to U : E_{K_{CC_{|t|}}}(\ldots E_{K_{CC_{|2|}}}(E_{K_{CC_1}}(m)) \ldots);\ u_m$
$RNG \to_p U : K$
$U \to_p CC_1 : E_K(E_{K_{CC_1}}(\ldots E_{K_{CC_{|t|}}}(m) \ldots));\ u_m$
$CC_1 \to_p CC_{|t|} : E_K(E_{K_{CC_{|2|}}}(\ldots E_{K_{CC_{|t|}}}(m) \ldots));\ u_m$
$CC_{|t|} \to_p CC_{|t-1|} : E_K(E_{K_{CC_{|2|}}}(\ldots E_{K_{CC_{|t-1|}}}(m) \ldots));\ u_m$
$\ldots$
$CC_{|2|} \to_p U : E_K(m);\ u_m$
Note that the user $U$ does not store the key $K_U$; he uses it only at the time of the encryption process. But since only $t$ among the $n$ participants are needed to decipher the information, in the decryption protocol the encrypted information can also be seen as the result of the encryption of the information by $t$ different cloud providers. In the decryption protocol, the user also performs an encryption with a fresh key $K$ in order to guarantee the confidentiality of his information with regard to the last cloud provider, who sends him the requested information. This fresh key is independent of the other keys $K_{CC_i}$ and of $K_U$.
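As a small illustration of this blinding step, the sketch below builds on the additive scheme of section 2.2: the user adds a fresh key $K$ before the providers strip their layers, so that the last provider only ever sees $E_K(m)$. The concrete values, the number of layers and the use of Fraction and secrets are assumptions for the example.

# Sketch of the fresh-key blinding of Protocol 3 on top of the additive scheme.
from fractions import Fraction
import secrets

m = 42                                                           # the secret information
layers = [Fraction(secrets.randbelow(2**64)) for _ in range(3)]  # stand-ins for the f(i)g(i) layers
c = m + sum(layers)                                              # layered ciphertext held by CC_1

K = Fraction(secrets.randbelow(2**64))                           # fresh key obtained by U from the RNG
c = c + K                                                        # U -> CC_1: blind before providers strip layers
for layer in layers:                                             # CC_1, CC_|t|, ..., CC_|2| each remove their layer
    c = c - layer                                                # commutativity: order does not matter
assert c - K == m                                                # U removes K last; the last provider only saw E_K(m)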
Even if a tenant can see the encrypted information, he does not have the key(s) to decrypt it. The same idea applies when there is a malicious entity within a cloud provider: no cloud provider alone has the secret key(s) needed to decrypt the secret information.
Since the transmitted data are encrypted, no one can learn the confidential information, neither when the data are transmitted between the cloud provider's servers nor when they are transmitted between the cloud provider and the user.
The only remaining security risk is a coalition of dishonest cloud providers, which is mitigated by choosing a high $t$ relative to $n$.
When dealing with $n$ cloud providers, it suffices that at least $n - t + 1$ cloud providers destroy their shares $K_{CC_i}$ to make the recovery of $m$ from $CC_1$ infeasible. Moreover, there are no logfile links to the stored information $m$.
4 CONCLUSIONS
In this paper, we have considered the security issues
related to the use of cloud computing infrastructures.
We have proposed three protocols that allow the needed cryptographic keys to be managed securely.
Our protocols are based on a $(t, n+1)$-threshold cryptosystem as well as on the principle of mutual cooperation between a set of cloud providers (therefore we can see them as a fair exchange of KMaaS between several cloud providers).
The main disadvantage of our third protocol is that each time the user encrypts (respectively decrypts) data, $n+1$ (respectively $t$) participants have to be involved. Moreover, each (encrypted) piece of data has to be sent to each participant. In other words, $n$, $t$ and the size of each sent piece of data have to be carefully chosen depending on the capacity of the interCloud system. For example, for a fixed value of $n$, when $t$ increases the number of communications between the cloud providers increases, but the risk of a coalition between cloud providers (in order to break the user's confidentiality) is reduced.
However, the advantages of our third protocol are twofold: the client uses cryptographic keys without having to hold them locally, and the cloud provider cannot retrieve the client's information alone, since each cloud provider retains only incomplete data.
We may assume the use of true random val-
ues when generating cryptographic keys in order to
achieve the highest level of security in this matter.
Moreover considering the specific context of cloud
computing, given the significant available comput-
ing power as well as the potential massive use of
random numbers, it is reasonable to target a higher
level of security in order to be protected against at-
tacks (Goldberg and Wagner, 1996; Woolley et al.,
2008; Garfinkel and Rosenblum, 2005; Ristenpart and
Yilek, 2010). Finally, we propose to use a fast random number generator for performance reasons and a true random number generator for security reasons.
The main remaining issue is how to choose the $t$ honest cloud providers used to decrypt a piece of data: there is not yet a solution to determine which cloud providers are dishonest. Future work will focus on this interesting issue.
REFERENCES
Armbrust, M., Fox, A., Griffith, R., Joseph, A., Katz,
R., Konwinski, A., Lee, G., Patterson, D., Rabkin,
A., Stoica, I., and Zaharia, M. (2009). Above the
clouds: A berkeley view of cloud computing. Tech-
nical report, EECS Department, University of Califor-
nia, Berkeley.
Bernstein, D., Ludvigson, E., Sankar, K., Diamond, S., and
Morrow, M. (2009). Blueprint for the intercloud - pro-
tocols and formats for cloud computing interoperabil-
ity. In ICIW, pages 328–336. IEEE Computer Society.
Buyya, R., Ranjan, R., and Calheiros, R. N. (2010). Inter-
cloud: Utility-oriented federation of cloud computing
environments for scaling of application services. In
Hsu, C.-H., Yang, L. T., Park, J. H., and Yeo, S.-S.,
editors, ICA3PP, volume 6081 of LNCS, pages 13–31.
Springer.
Catteddu, D. and Hogben, G. (2009). Cloud computing:
benefits, risks and recommendations for information
security. Technical report, ENISA.
Celesti, A., Tusa, F., Villari, M., and Puliafito, A. (2010).
How to enhance cloud architectures to enable cross-
federation. In International Conference on Cloud
Computing, CLOUD ’10, pages 337–345. IEEE Com-
puter Society.
Garfinkel, T. and Rosenblum, M. (2005). When virtual is
harder than real: security challenges in virtual ma-
chine based computing environments. In Conference
on Hot Topics in Operating Systems, volume 10 of
HOTOS’05, pages 20–20. USENIX Association.
Geater, J. (2011). Comment: Key management strategies in the cloud. http://www.infosecurity-magazine.com/view/18818/comment-key-management-strategies-in-the-cloud.
Gellman, R. (2009). Privacy in the clouds: Risks to privacy and confidentiality from cloud computing. World Privacy Forum, pages 1–26.
Goldberg, I. and Wagner, D. (1996). Randomness and the Netscape browser. Dr. Dobb's Journal.
Hogben, G. (July 2009). Privacy, Security and Identity in
the Cloud. ENISA.
Kandukuri, B., Paturi, R., and Rakshit, A. (2009). Cloud se-
curity issues. In International Conference on Services
Computing, SCC’09, pages 517–520. IEEE Computer
Society.
Kelly, K. (2007). http://www.kk.org/thetechnium/archives/2007/11/a_cloudbook_for.php.
Mather, T., Kumaraswamy, S., and Latif, S. (2009). Cloud
Security and Privacy: An Enterprise Perspective on
Risk and Compliance. O’Reilly.
Ristenpart, T., Tromer, E., Shacham, H., and Savage, S.
(2009). Hey, you, get off of my cloud: exploring in-
formation leakage in third-party compute clouds. In
Conference on Computer and Communications Secu-
rity, pages 199–212. ACM.
Ristenpart, T. and Yilek, S. (2010). When good random-
ness goes bad: Virtual machine reset vulnerabilities
and hedging deployed cryptography. In NDSS. The
Internet Society.
Rocha, F. and Correia, M. (2011). Lucy in the sky with-
out diamonds: Stealing confidential data in the cloud.
In International Conference on Dependable Systems
and Networks Workshops, DSNW’11, pages 129–134.
IEEE Computer Society.
Shamir, A. (1979). How to share a secret. Commun. ACM,
22(11):612–613.
Sogeti (2009). Cloud computing - etat de l’art.
Squicciarini, A., Sundareswaran, S., and Lin, D. (2010).
Preventing information leakage from indexing in the
cloud. In CLOUD, pages 188–195. IEEE.
Syntec informatique (2010). Le livre blanc du cloud computing - tout ce que vous devez savoir sur l'informatique dans le nuage.
Urquhart, J. (7 Jan. 2009). The Biggest Cloud-Computing
Issue of 2009 is Trust. C-Net News.
Woolley, R., Murray, M., Dounin, M., and Ermilov, R.
(2008). arc4random predictable sequence vulnera-
bility. http://security.freebsd.org/advisories/FreeBSD-
SA-08:11.arc4random.asc.
KeyManagementasaService
281