
An Overview of ANONIZE: A Large-Scale Anonymous Survey System


This article first appeared in IEEE Security & Privacy magazine. IEEE Security & Privacy offers solid, peer-reviewed information about today's strategic technology issues, helping IT managers and technical leads meet the challenges of running reliable, flexible enterprises.

 

A new scheme enables a survey authority to independently select a group of registered users and create a survey in which only selected users can anonymously submit exactly one response. This technology has numerous applications including university course evaluations, online product reviews, and whistleblowing.

Companies, universities, healthcare providers, and government agencies often attempt to collect data from targeted groups of users by running surveys. Such surveys aim to satisfy two basic but conflicting properties: survey results need to be authentic—that is, only a specific set of users should be allowed to submit data, and each user should be allowed to submit only once—yet must be anonymous—that is, no link should exist between users and their survey data, so users feel safer about submitting honest feedback.

The most straightforward way to implement authenticity is for the survey implementer to request usernames during submission, but this obviously breaks user anonymity. The most straightforward way to implement anonymity is to avoid collecting usernames during submission, but this might allow attacks in which malicious users submit hundreds of responses to skew the results.

In light of these deficiencies, we sought cryptographic solutions in which both anonymity and authenticity are guaranteed. In this article, we describe our ad hoc survey scheme ANONIZE and report on its implementation for very large surveys. As far as we know, this is the first implementation of a provably secure multiparty protocol that scales to handle millions of users.

Existing Techniques

One way to address anonymity and authenticity is to employ a trusted third party to collect usernames during submission, then delete the names when providing results to the survey initiator. However, placing such trust in a survey collector might be too dangerous. Even if the survey collector intends to keep the links between users and their surveys private, its system might be stolen or broken into, and the information leaked. For instance, in 2009, a Cornell University computer was stolen that contained sensitive personal information, such as names and Social Security numbers, of more than 45,000 current and former university members.1

Even if users have full confidence in the trusted third party and its ability to keep data secure, developing an anonymous survey system using such a trusted party requires care. For example, in the implementation of course reviews, side-channel information indicating which users have already filled out the survey might leak information about the order in which students participated. Later, the order of the students’ comments in the aggregated responses might be correlated to break anonymity.

Furthermore, in many situations, jurisdictional boundaries or legal requirements make relying on trusted third parties to maintain anonymity infeasible. For instance, storing sensitive patient information on a third-party system might be illegal, and many countries don’t permit sensitive data to be stored on servers run by foreign corporations due to the potential for data seizure.

Finally, if a trusted third party removes all identifying information accompanying a submission to provide anonymity or accepts submissions from anonymized networks, then the trusted party loses the ability to verify whether a participant submits multiple responses.

Cryptographic voting techniques offer a partial solution to this problem.2 In such schemes, each survey consists of two steps. First, users authenticate themselves to a server and anonymously check out a single-use token, which carries no link to their identity. Second, users participate in the specified survey using their token. Such schemes provide good anonymity if users separate these steps with a reasonably long time lag—otherwise, there's a clear time link between users and their data. However, if users are required to separate the two steps by, say, one day, the survey becomes significantly less convenient than nonanonymous surveys or anonymous surveys employing a trusted third party. In addition, the extra steps required to authenticate for each survey might be onerous.

ANONIZE: Electronic Ad Hoc Surveys

We consider a general solution to the problem of anonymously collecting feedback from an authenticated group of individuals. In our ad hoc survey scheme, ANONIZE, anyone can select a group of individuals and create a survey that can be completed once by only those individuals. In addition, the survey initiator can initiate this survey knowing only the identities (for instance, the email addresses) of the users in the ad hoc group; no further interaction between the survey initiator and the users is required. Our method provides essentially the same ease of use as traditional (nonanonymous) electronic surveys; thus, we expect user participation to increase, making the feedback more valuable.

A proof of security for ANONIZE's cryptographic protocols can be found in "ANONIZE: A Large-Scale Anonymous Survey System";3 this proof holds even if attackers participate in an arbitrary number of concurrent surveys. ANONIZE supports millions of write-in surveys in minutes, in contrast to mix-net or zero knowledge–based voting systems, which currently can handle only thousands of votes in several hours.

The Parties and Steps

An ad hoc survey system comprises three parties: a single registration authority (RA) that issues master user tokens, one or more survey authorities (SAs) that can create surveys, and multiple users who provide survey data. Users must first register with the RA and retrieve a secret master user token. This token can be used for all future surveys. Anyone can act as an SA by generating a unique survey ID and publishing a list of identities that are permitted to participate in that survey. This list of participants can grow dynamically, and the SA can create a survey entirely on its own, without interaction with either the RA or the users. Finally, users on a survey's list of valid identities can submit a survey response by simply routing a message to the SA via an anonymous network, such as Tor, or an anonymous proxy relay.
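As a concrete illustration of survey creation, here is a hypothetical Python sketch of the SA side: generate a fresh survey ID and signing key, then publish a signed entry for each authorized identity. Ed25519 is used only as a stand-in signature scheme (ANONIZE's actual signatures live in a bilinear group), all names are illustrative, and the sketch requires the third-party cryptography package.

```python
# Hypothetical sketch of SA-side survey creation (stand-in crypto, not ANONIZE's).
# Requires the third-party `cryptography` package for the Ed25519 stand-in signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def create_survey(survey_id: str, participant_emails: list[str]):
    """Generate a fresh per-survey signing key and sign each authorized identity.
    The SA publishes the survey ID, the verification key, and the signed list;
    no interaction with the RA or the users is needed."""
    signing_key = Ed25519PrivateKey.generate()
    signed_list = {
        email: signing_key.sign(f"{survey_id}|{email}".encode())
        for email in participant_emails
    }
    return signing_key.public_key(), signed_list


verify_key, signed_list = create_survey(
    "Survey for CS350 for Spring 2014 by Professor Brown at Cornell University",
    ["student1@cornell.edu", "student2@cornell.edu"],
)
# Anyone can check that a given identity was authorized for this survey.
verify_key.verify(
    signed_list["student1@cornell.edu"],
    b"Survey for CS350 for Spring 2014 by Professor Brown at Cornell University|student1@cornell.edu",
)
```

Because the signed participant list is public, auditing who was authorized requires nothing beyond the verification key.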

Once all submissions are collected, the SA might publish a list of all survey responses, depending on external privacy requirements. If survey responses are made public, they can be audited. Survey responders can inspect the output to check that their submissions were counted. Moreover, anyone can check that each submission was from a unique authorized user—for example, users can check for “ballot stuffing” and verify the list of authorized users.

This technology could apply to many online survey scenarios and possibly enable new ones. We provide three detailed examples of how these surveys might be used.

University Course Evaluation

Most universities let students evaluate each course that they complete. These surveys typically include write-in sections in which open-ended answers are encouraged. In the past, many universities conducted these surveys on papers handed out and then collected during one of the final class sessions. However, many universities are moving to online surveys to increase participation and ease data collection. A link to an online course evaluation survey is typically emailed to all students, who must trust the website collecting their responses to keep them anonymous. As we discussed, this is a dangerous assumption, even if the website makes a good faith effort to do so. To increase student confidence in the system, we use an ad hoc survey.

Student registration. When students are asked to set up their university account information (while proving their identity using traditional, nonelectronic methods), they generate an unlinkable master user token that is tied to their school email address. This step can be done at a later stage if desired, or after a student has lost his or her credential and needs a new one, but it needs to be done only once.

Course survey setup. When administrators want to set up a course survey, they generate a survey key based on a unique survey ID, such as “Survey for CS350 for Spring 2014 by Professor Brown at Cornell University,” and a list of course participants’ email addresses.

Survey execution. To complete the survey, a student's client—either a computer or smartphone—combines the survey key and the master user token to generate an unlinkable one-time token. This token satisfies two properties: it carries no link to the student's identity (thus, we have anonymity), and for a given survey key, a student can obtain at most one such token (thus, we ensure that each student can complete the survey only once). Survey results are tabulated and possibly announced. If results are made public, students can verify that their responses were included. Once registration is complete, setup and execution can be performed repeatedly. Participants do not need to check out a new single-use token for each survey; rather, their client uses the master user token to create a unique single-use token for each survey without any interaction that could deanonymize them.
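To make the token derivation concrete, here is a minimal, hypothetical sketch in Python. ANONIZE derives the one-time token with an algebraic PRF that can be proven correct inside a zero-knowledge argument; the HMAC below is only a stand-in for that PRF interface, and all names and survey IDs are illustrative.

```python
# Minimal sketch of per-survey token derivation (not the real ANONIZE PRF).
# HMAC-SHA256 stands in for the algebraic PRF F_sid(vid) described in the article.
import hashlib
import hmac
import secrets


def new_master_seed() -> bytes:
    """Secret seed held only by the student's client (part of the master user token)."""
    return secrets.token_bytes(32)


def one_time_token(master_seed: bytes, survey_id: str) -> str:
    """Deterministic per-survey token: the same (seed, survey) pair always yields
    the same token, so a student can submit at most one counted response per survey,
    while tokens for different surveys look unrelated."""
    return hmac.new(master_seed, survey_id.encode(), hashlib.sha256).hexdigest()


seed = new_master_seed()
t1 = one_time_token(seed, "Survey for CS350 for Spring 2014 by Professor Brown at Cornell University")
t2 = one_time_token(seed, "Survey for CS350 for Spring 2014 by Professor Brown at Cornell University")
t3 = one_time_token(seed, "Survey for CS322 for Fall 2014 by Professor Green at Cornell University")

assert t1 == t2  # a repeated submission reuses the same token and is detected
assert t1 != t3  # tokens for different surveys cannot be linked to each other
```

The real scheme additionally attaches a zero-knowledge argument proving the token was derived from a registered, authorized seed, which is what lets the collector accept the token without learning who produced it.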

Online Product Review

Many online retailers display a set of customer reviews next to each product. These reviews often influence prospective customers. To avoid returns and customer dissatisfaction, retailers have a vested interest in the reviewers' credibility. To bolster this credibility, many retailers indicate which reviews come from customers who verifiably purchased the product on their site. This process is currently nonanonymous; the retailer knows exactly which customer posted which review. We conjecture that a significant fraction of customers would be more likely to post a review if they could do so anonymously. We explore how ad hoc surveys can give customers this anonymity while still allowing the retailer to verify their purchases and ensure at most one review per purchase.

Customer account creation. When customers create an online account with a retailer—providing an email address and credit card to confirm identity—they are given the option to enroll in anonymous reviewing. If they choose to do so, they can then interact with the retailer to generate an unlinkable master user token that is tied to their online account identifier, such as their username or email address. This step can also be done at a later stage, but it needs to be done only once.

Product purchase transaction. Whenever customers enrolled in anonymous reviewing make a transaction, the retailer adds their online account identifier to an internal list of certified purchasers of a given product. This list, together with a product identifier, forms the survey ID.

Review execution. If customers want to post a review for a product they have purchased, their client combines the survey ID and their master user token to generate an unlinkable one-time token that they can use to complete the review. This token carries no link to user identity but can be used only once. Once the retailer receives the review with the token, which could be routed anonymously, the retailer can verify it and post it as a "verified purchase review."

Again, once the registration step—that is, enrollment in anonymous reviewing and obtaining a master user token—is complete, users can perform an unlimited number of purchases and reviews.

In this case, the retailer helps to create the master user tokens. Alternatively, customers could obtain master user tokens by interacting with their bank or credit card company. Retailers, restaurants, service providers, and so forth could then generate lists of authorized reviewers based on the bank account or credit card number used for the purchases. The review execution would remain the same. This model could work well for websites that review other parties’ services, because the sites could guarantee that they were posting reviews of actual customers, while customers remain anonymous.

Whistleblowing

Frequently, whistleblowers want to provide information to an organization's ombudsman about alleged misconduct. Due to fears of reprisal, many whistleblowers prefer to remain anonymous. However, upon receiving a complaint for investigation, an ombudsman should first ascertain that the source of the complaint is legitimate—say, a verified employee—and not just a random malcontent. In many cases, whistleblowers might be able to prove that they are in the organization by providing information only an employee would know, but doing so could break anonymity. An ad hoc survey can give whistleblowing employees anonymity while letting an ombudsman verify that a complaint comes from within the organization.

Employee account creation. When employees first join an organization, they are registered and issued a master user token tied to their system account. Concurrently, the ombudsman adds them to a whistleblowing survey that consists of all organization employees and provides employees—in conjunction with the master user token—a signature showing that their employee ID is certified for participation in the survey. The ombudsman publishes a signed list of all participants to show that all employees can contribute.

Whistleblowing and verification. Should employees uncover illegal or unethical activities, they can write a memo to the ombudsman via the whistleblowing survey. They certify the memo with their master user token and their signed employee ID on the whistleblowing participant list. Upon receipt via an anonymous channel, the ombudsman can verify that the submission comes from a valid survey participant, and thus a legitimate employee.

Features and Security Requirements

There are two crucial aspects of an ad hoc survey. The first is the privacy property: even if an RA and SA are arbitrarily corrupted and in collusion, they cannot learn anything about how particular users answered a survey or discover correlations between groups of users. This property primarily benefits users, although the surveyor might benefit from increased participation and from respondents having less incentive to bias their answers. The second is the security property, which requires that only authorized users can complete a survey and that they can complete it at most once. This property primarily benefits surveyors, although users are also assured that their responses will not be lost in a deluge of unauthorized responses.

In "ANONIZE: A Large-Scale Anonymous Survey System," we precisely defined ad hoc surveys' security properties.3 As we mentioned, we are interested in providing security not only for a single survey but also against attacks spanning many surveys—be they in the past, concurrent, or in the future. Toward this goal, we provide simple game-based security definitions and directly analyze our protocol's security under concurrent executions.

In a game-based definition, there are two parties: a challenger who represents all honest parties and an adversary who represents all corrupted parties. Challengers and adversaries interact with one another according to the rules of the game; for instance, adversaries might be able to ask to register corrupted users and see the survey outputs generated by honest users of their choice. At some point, a challenger gives an adversary a challenge—for instance, an honestly generated survey response. At the end of the game, the adversary provides a response to this challenge; the adversary might guess which honest user generated the challenge. If this challenge response is correct, the adversary wins the game.

The definition of security states that for any realistic, time-bounded adversary, the probability of winning the game is very close to the probability of winning based on a random guess. Thus, a proof under this definition rules out all realistic attackers, provided the game accurately captures all actions that the adversary can take in the real world. The formalization of both the security definitions and the corresponding proofs requires care, especially for a system as complex as ad hoc surveys.

This is analogous to other cryptographic game-based definitions, such as those for blind signatures.4 Although related notions of anonymity and authenticity have been defined in the literature for other applications, such as group signatures, ring signatures, and anonymous credentials, our setting is considerably more complex, and thus our definitions differ.

Privacy and Unlinkability

Our first important property is survey unlinkability. The SA and RA should not be able to link users to their survey responses by analyzing the protocol’s message traffic (separate measures ensure network-layer unlinkability). We require that this holds even if attackers register multiple identities and see submissions for chosen users in this or any other survey.

We introduce an adaptive notion of unlinkability in which survey responses remain unlinkable even if the adversaries (the SA and RA) can, at any point during the security experiment, force other users to submit responses in ways that might help the adversaries link the remaining users.

Roughly speaking, the privacy game starts when an adversary establishes the parameters for one RA. Two distinct honest users id0 and id1 register with the adversarial RA. The adversary then outputs the public information for a "challenge survey" and receives the survey submissions for id0 and id1 in random order. The adversary wins if it can correctly guess which user formed which submission. The definition states that no realistic time-bounded adversary can win with probability much better than 1/2.
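In the usual cryptographic shorthand (standard notation, not spelled out in the article), this requirement says the adversary's advantage over a coin flip must be negligible in the security parameter:

```latex
\mathsf{Adv}^{\mathrm{unlink}}_{\mathcal{A}}(\lambda)
  \;=\; \Bigl|\,\Pr[\mathcal{A}\text{ correctly pairs users to submissions}] - \tfrac{1}{2}\,\Bigr|
  \;\le\; \mathrm{negl}(\lambda),
```

where the probability is over the random ordering of the two challenge submissions and the coins of all parties.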

Security and Authenticity

Our second important property is authenticity: malicious users should not be able to submit responses unless the SA has authorized them to participate in the survey. This property should hold even when users arbitrarily create fake identities, fake surveys, and new surveys, which may be related in some form to the survey under attack. Moreover, this property ensures that potentially malicious users can complete such surveys only once. If they successfully submit multiple times, their submissions use the same token and can be easily identified, then joined or discarded, depending on the survey policy.

Roughly speaking, the security game starts when a challenger establishes the parameters for one RA and many SAs. An adversary can then generate new survey IDs for any SA, ask for any survey submission output by any honest user for chosen surveys, and register corrupted users. The challenge involves a new, honestly generated survey ID, but the adversary chooses both the list of participants and the SA. The adversary responds with a set of survey submissions, which the challenger checks against four conditions, determining whether

  • an adversary produced more submissions than allowed,
  • all submissions are valid,
  • all submissions have different token numbers, and
  • all token numbers are new and therefore created by the adversary.

Satisfying all the conditions results in the adversary winning. The definition states that no realistic time-bounded adversary can win with probability much better than 0.
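The flip side of these winning conditions is the checking a survey collector performs in practice. The following is a hypothetical Python sketch of that filtering step; verify_proof stands in for verification of the NIZK argument attached to each submission and is not a call from any real library.

```python
# Hypothetical sketch of collector-side filtering: keep submissions whose proof
# verifies, collapse duplicates that reuse a one-time token, and flag the batch
# if more distinct valid tokens appear than there are authorized users.
from typing import Callable, Dict, List, Tuple

Submission = Tuple[str, str]  # (one_time_token, response_data)


def filter_submissions(subs: List[Submission],
                       authorized_count: int,
                       verify_proof: Callable[[Submission], bool]) -> List[Submission]:
    by_token: Dict[str, Submission] = {}
    for sub in subs:
        token, _data = sub
        if not verify_proof(sub):
            continue              # drop submissions with invalid proofs
        by_token[token] = sub     # a resubmission overwrites the earlier one (a policy choice)
    accepted = list(by_token.values())
    if len(accepted) > authorized_count:
        raise ValueError("more distinct valid tokens than authorized users")
    return accepted


# Toy demonstration with a stand-in verifier that accepts everything.
demo = [("tok-a", "great course"), ("tok-a", "edited review"), ("tok-b", "too much homework")]
print(filter_submissions(demo, authorized_count=2, verify_proof=lambda s: True))
```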

Building ANONIZE

We constructed our system in two phases. First, we provided an abstract implementation of secure ad hoc surveys from generic primitives, such as commitment schemes, signature schemes, pseudorandom functions (PRFs), and generic noninteractive zero-knowledge (NIZK) arguments for all assertions that can be efficiently verified. A commitment scheme lets a sender commit to a message without revealing that message to a receiver. A signature scheme allows public authentication of a message. A PRF is a seeded deterministic function that maps any input to a random-looking output, assuming one has no knowledge of the seed. Finally, an NIZK argument provides a proof of an assertion—for example, "I know a signature by the RA on message m"—without revealing anything beyond the truth of this statement, such as signature bits. We proved the abstract scheme's security based on the assumption that all generic primitives employed are secure. We took explicit care to show that our schemes remain secure even when an adversary initiates many concurrently executing sessions in the system.
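As an illustration of the commitment interface only, here is a minimal sketch. ANONIZE instantiates commitments with the Pedersen scheme in a bilinear group; the hash-based construction below is a hypothetical stand-in that just shows the commit/open contract.

```python
# Hypothetical hash-based commitment, standing in for the commitment interface
# the abstract scheme relies on (ANONIZE itself uses Pedersen commitments).
import hashlib
import secrets


def commit(message: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening). The commitment hides the message; the
    opening later convinces a verifier of what was committed (binding)."""
    opening = secrets.token_bytes(32)
    commitment = hashlib.sha256(opening + message).digest()
    return commitment, opening


def verify_opening(commitment: bytes, message: bytes, opening: bytes) -> bool:
    return hashlib.sha256(opening + message).digest() == commitment


c, r = commit(b"random PRF seed s_id")
assert verify_opening(c, b"random PRF seed s_id", r)
assert not verify_opening(c, b"some other seed", r)
```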

In the second phase, we showed that the generic scheme can be instantiated with a specific commitment scheme, signature scheme, PRF, and NIZK arguments to obtain our efficient, secure ad hoc survey scheme ANONIZE. Our system is based on specific computational assumptions related to the underlying primitives' security. As in many other efficient cryptographic systems, our analysis treats the hash function—used as part of the NIZK argument—as an idealized random oracle.5

A surprising aspect of this second phase is that our generic protocol does not rely on the underlying primitives in a black-box way; rather, the NIZK argument is used to prove complex statements that refer to the actual code of the commitments, signatures, and PRFs used. In this phase, we relied on ideas similar to those underlying efficient constructions of anonymous credentials in bilinear groups,6 although our constructions differ in a few ways. As far as we know, our scheme is one of the first implementations of a complex cryptographic scheme that is concurrently secure.

An Abstract Construction

This high-level overview omits several important features but conveys the intuition of our abstract construction, which comprises three steps (a compact summary in notation follows the list):

  • Registration. A user with identity id registers with the RA by sending a commitment to a random seed sid of a PRF F. If the user has not already registered, the RA signs the user's name along with the commitment. The returned signature is the user's master user token.
  • Survey. To create a survey, an SA generates a new signing key and publishes a list of signed user identities along with the survey ID/verification key, vid.
  • Response. To complete survey ID vid, a user id generates a single-use token Fsid(vid) by evaluating the PRF on the seed sid with input vid, and then presents an NIZK argument that it "knows a signature by the RA on its identity id and a commitment to a seed sid," it "knows a signature by the SA on its id," and the single-use token is computed as Fsid(vid). The user's actual survey data will be part of, and thereby authenticated by, this NIZK argument.
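In notation, and glossing over details the paper makes precise, the three steps can be summarized as follows (Com, Sign, and F denote the generic commitment, signature, and PRF, and the NIZK statement is paraphrased):

```latex
\begin{aligned}
\textbf{Registration:}\quad & \sigma_{id} \leftarrow \mathrm{Sign}_{RA}\bigl(id,\ \mathrm{Com}(s_{id})\bigr) && \text{(master user token)}\\
\textbf{Survey:}\quad       & \tau_{id} \leftarrow \mathrm{Sign}_{SA}(id) \ \text{for each listed } id,\ \text{publish } vid\\
\textbf{Response:}\quad     & \text{publish } \bigl(F_{s_{id}}(vid),\ m,\ \pi\bigr),\ \text{where } \pi \ \text{is an NIZK argument of knowledge of}\\
                            & (id,\ s_{id},\ \sigma_{id},\ \tau_{id}) \ \text{consistent with the token } F_{s_{id}}(vid) \ \text{and the survey data } m.
\end{aligned}
```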

The NIZK proof in the survey completion step helps ensure that only authorized users can complete the survey and that they can compute at most one single-use token and thus complete the survey only once. If users want to replace their survey response before the deadline and the system allows this, they can create a new NIZK argument with new data for the same Fsid(vid) value; the old response carrying this value can then be deleted. Anonymity is supported by the fact that neither the RA nor the SA ever sees the seed sid (only a commitment to it), by the NIZK arguments' zero-knowledge property, and by the PRF's pseudorandomness.

Proving that this abstract protocol is secure is nontrivial. To guarantee security under concurrent executions, we introduce the notion of tag-based online simulation-extractable NIZK, which is essentially equivalent to the notion of universally composable NIZK.8

A Concrete Construction

To enable the second phase of our construction—instantiation of the abstract protocol using specific primitives—we demonstrate a simple and efficient way to implement online simulation-extractable NIZK arguments in the random oracle model. The key to the construction is choosing appropriate commitments, signatures, and PRFs that can be "stitched together" so that we can provide an efficient NIZK argument for the complex statement in the abstract protocol. This integration step is nontrivial; to produce efficient NIZK arguments and thereby an overall efficient system, we must look closely at each building block's underlying algebraic structure to find primitives that can leverage a common algebraic structure.

Although we provide one method of implementing the abstract system, there are endless alternatives. We chose this concrete implementation based on its efficiency, simplicity, and our confidence in the underlying building blocks’ security.

Our ANONIZE construction is placed in a "bilinear map" algebraic setting, which is commonly believed to provide high security levels with low bandwidth and computationally efficient implementations.9 These bilinear maps are typically implemented via an underlying elliptic curve. The common input for all protocols includes a description of the bilinear mapping, generators for the algebraic groups involved, and a description of a collision-resistant hash function. The bilinear map we chose is typically one of a handful of options from a given library. The elliptic curve library we used implements the hash function differently depending on the curve implementation. The RA can randomly choose the generators.
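For readers unfamiliar with the setting, a bilinear map (or pairing) is a map between groups satisfying the standard bilinearity property; this is textbook notation rather than anything specific to ANONIZE:

```latex
e : G_1 \times G_2 \rightarrow G_T, \qquad
e\bigl(g^{a},\, h^{b}\bigr) \;=\; e(g, h)^{ab} \quad \text{for all integers } a, b,
```

where e is non-degenerate and efficiently computable, and g and h generate G1 and G2, respectively.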

With these common settings established, our system uses the following building blocks: the Pedersen commitment scheme,10 the Dodis–Yampolskiy PRF,11 and a simplified signature scheme derived from the Boneh–Boyen identity-based cryptosystem.12 The final nontrivial step was to devise the efficient NIZK arguments for the statements we need to prove concerning these building blocks. For a deeper technical discussion, see "ANONIZE: A Large-Scale Anonymous Survey System."3
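For orientation, the textbook forms of the first two building blocks, and the short signature usually associated with the Boneh–Boyen approach, are shown below; the exact variants ANONIZE uses may differ in their details, so treat these as background rather than the paper's precise constructions:

```latex
\begin{aligned}
\text{Pedersen commitment:}\quad & \mathrm{Com}(m; r) = g^{m} h^{r}\\
\text{Dodis–Yampolskiy PRF:}\quad & F_{s}(x) = e(g, g)^{1/(s + x)}\\
\text{Boneh–Boyen-style signature on } m \text{ (secret key } x\text{):}\quad & \sigma = g^{1/(x + m)}
\end{aligned}
```

All three live in the same bilinear-group setting, which is what allows the NIZK argument to "stitch them together" into one efficiently provable statement.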

An Implementation and Experiments

ANONIZE can easily handle large numbers of users with moderate resources for the registration and survey authorities. The computational cost for users is quite low as well; a typical desktop can perform the worst-case user operation in a few seconds, using a single core of the machine. Thus, we argue our system scales well at affordable costs.

We tested our system by implementing it in C++11 using the MIRACL big number library, which supports pairing-based (that is, bilinear map-based) cryptography and is free for educational purposes.13 Our implementation uses two curves with the Ate pairing: a Barreto–Naehrig (BN) pairing-friendly curve that MIRACL equates to 128-bit security and a Barreto–Lynn–Scott (BLS) pairing-friendly curve that MIRACL equates to 256-bit security. Depending on how aggressive the security assumptions are, these implementations provide either close to 128- and 256-bit security, respectively, or only 128-bit security from the 256-bit implementation, with the 128-bit implementation not considered secure, due to possible loss of security in the security proof. We performed all tests on a 3.06 GHz Intel Core 2 Duo, Late 2009 iMac with 12 Gbytes of 1,067 MHz DDR3 RAM and a 5,400 RPM SATA HD.

Our unoptimized implementation demonstrates efficiency for nearly all practical surveys. In particular, our implementation utilizes only one core of the CPU, but one can easily load-balance user registration and survey verification over multiple cores and machines by having all cores run the same processes. Similarly, when generating new surveys, we can split the participant list among several different cores at the SA, and each would sign the names of the individuals on its portion of the list.

Our results show that one or two workstations or servers are sufficient to manage millions of surveys using the more efficient BN curves, and a small number of high-performance machines could easily handle surveys of similar or larger sizes using the BLS curves. User-side computation is minimal: submitting a survey and verifying a submitted survey—the most expensive operations a user might perform—take seconds.

Experiments

Table 1 shows our experiment results. Each experiment was performed 100 times, with mean and standard deviation times reported in milliseconds. The measured times correspond to the time necessary to compute the appropriate cryptography and store the result to disk; no network measurements were involved. The most expensive operation was the mass verification of submitted surveys, which is performed by an SA.

In our BN implementation, we can verify 1 million submissions in approximately 33 hours per core. Assuming a reasonable four cores per system gives us a little over eight hours for one system; three systems could complete the process in approximately two hours. In the BLS setting, assuming we had to verify the submissions of 1 million people, we could use approximately 20 machines with four cores each and compute the results in less than eight hours.

If there is no need to keep the survey results private, this computing power can be rented from the cloud to lower costs; verification does not require private information, so renting resources is less risky. Survey generation and the RA side of user registration also concentrate computing costs at an authority. Both are at least an order of magnitude less time intensive than survey verification and can be efficiently distributed over similar resources.

Storage and Bandwidth Requirements

Storage and bandwidth requirements are reasonable for such schemes. Each element in the survey list output during the survey registration is less than 1 Kbyte, as are users’ secret tokens. The most expensive NIZK argument used in the survey submission is smaller than 8 Kbytes. This excludes the IDs’ length, which is system dependent but on the order of a few hundred bytes at most.
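As a rough back-of-the-envelope estimate based on these sizes (our arithmetic, not a figure from the article), a million-user survey involves on the order of

```latex
10^{6}\ \text{submissions} \times 8\ \text{Kbytes} \approx 8\ \text{Gbytes}
```

of raw submission data for the SA to receive and store, which fits comfortably on commodity hardware.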

The ability to run truly anonymous, ad hoc online surveys is a critical aspect of online security. Surveys in which participants are not anonymous might not elicit truthful responses. More important, we believe it is unethical to depend on trusted third parties, such as traditional online survey providers, to provide anonymity when dealing with sensitive survey data that participants supply under the assumption that they are anonymous. Recent attacks on major companies (for instance, Sony) have demonstrated that such solutions can be breached, and the anonymity they claim to provide is just an illusion. ANONIZE provides a solution to this problem without relying on any trusted third parties.

References

  1. "Security Breach Leaves 45,000 at Risk of Identity Theft," Cornell Daily Sun, 24 June 2009.
  2. D. Chaum and T.P. Pedersen, "Wallet Databases with Observers," Advances in Cryptology, LNCS 740, 1992, pp. 89–105.
  3. S. Hohenberger et al., “ANONIZE: A Large-Scale Anonymous Survey System,” IEEE Symp. Security and Privacy, 2014, pp. 375–389.
  4. A. Juels, M. Luby, and R. Ostrovsky, “Security of Blind Digital Signatures (Extended Abstract),” Advances in Cryptology, 1997, pp. 150–164.
  5. M. Bellare and P. Rogaway, “Random Oracles Are Practical: A Paradigm for Designing Efficient Protocols,” Proc. 1st ACM Conf. Computer and Comm. Security (CCS 03), 1993, pp. 62–73.
  6. J. Camenisch and A. Lysyanskaya, “Signature Schemes and Anonymous Credentials from Bilinear Maps,” Advances in Cryptology, 2004, pp. 56–72.
  7. O. Goldreich, The Foundations of Cryptography, Cambridge Univ., 2001.
  8. A. De Santis et al., "Robust Non-interactive Zero Knowledge," SIAM J. Computing, vol. 20, 2001, pp. 1084–1118.
  9. D. Boneh and M.K. Franklin, “Identity-Based Encryption from the Weil Pairing,” Advances in Cryptology, LNCS 2139, 2001, pp. 213–229.
  10. T.P. Pedersen, “Non-interactive and Information-Theoretic Secure Verifiable Secret Sharing,” Advances in Cryptology, 1991, pp. 129–140.
  11. Y. Dodis and A. Yampolskiy, “A Verifiable Random Function with Short Proofs and Keys,” Public Key Cryptography, LNCS 3386, 2005, pp. 416–431.
  12. D. Boneh and X. Boyen, “Efficient Selective-ID Secure Identity-Based Encryption without Random Oracles,” Advances in Cryptology, LNCS 3027, 2004, pp. 223–238.
  13. M. Scott, “Multiprecision Integer and Rational Arithmetic C/C++ Library (MIRACL),” Shamus Software, 2014.

About the Authors

Susan Hohenberger is an associate research professor in the Department of Computer Science at Johns Hopkins University. Hohenberger received a PhD in computer science from the Massachusetts Institute of Technology. Her research on practical cryptographic systems received an NSF Career Award, a Google Research Award, and a Microsoft Faculty Fellowship. Contact her at susan@cs.jhu.edu.

Steven Myers is an associate professor in the Department of Informatics and Computing at Indiana University and director of the academic security programs. His research interests include theoretical and applied cryptographic systems, phishing, and understanding emergent effects on cybersecurity. Myers received a PhD in computer science from the University of Toronto. Contact him at samyers@indiana.edu.

Rafael Pass is an associate professor of computer science at Cornell University and Cornell NYC Tech. His research interests include cryptography and its interplay with computational complexity and game theory. Pass received a PhD in computer science from the Massachusetts Institute of Technology. He’s a recipient of the NSF Career Award, the AFOSR Young Investigator Award, the Alfred P. Sloan Fellowship, and the Microsoft Faculty Award. Contact him at rafael@cs.cornell.edu.

Abhi Shelat is an associate professor of computer science at the University of Virginia. shelat received a PhD in computer science from the Massachusetts Institute of Technology. He received the NSF Career award, the Virginia FEST Distinguished Young Investigator prize, a Microsoft Research Faculty Fellowship, and an SAIC Scholars Research Award. Contact him at abhi@virginia.edu.

 
