More Details on the Android JCA PRNG Flaw

I've spent a couple of days reading the source code for the pseudo-random number generators in Android, mostly because there aren't many breakdowns of the vulnerability around; none that walk through the code explicitly, anyway. After some discussion with people from the Android Security Discussion Google Group, I realized that the issue goes a little deeper than just the super calls and constructor definition, as I previously thought.

I was also misled by grepcode---the site I was using to read the code---since it wasn't directing me to the Android SecureRandom implementation but rather to OpenJDK!

So I thought I'd correct myself, re-post about the issue, and study the code directly from the Android repo, namely ( )

Anyway, here's the official Android SecureRandom constructor from the implementation in the Jelly Bean release:

  89 public SecureRandom() {
  90     super(0);
  91     Services.refresh();
  92     Provider.Service service = Services.getSecureRandomService();
  93     if (service == null) {
  94         this.provider = null;
  95         this.secureRandomSpi = new SHA1PRNG_SecureRandomImpl();
  96         this.algorithm = "SHA1PRNG";
  97     } else {
  98         try {
  99             this.provider = service.getProvider();
  100            this.secureRandomSpi = (SecureRandomSpi) service.newInstance(null);
  101            this.algorithm = service.getAlgorithm();
  102        } catch (Exception e) {
  103            throw new RuntimeException(e);
  104        }
  105    }
  106 }
extract from

The code in this constructor is important because the flaw in the implementation comes directly from how the PRNG is seeded implicitly, namely when the developer does not provide a seed via the constructor and relies on the default constructor's mechanism to seed it. This constructor can branch off in a few different directions, each potentially seeding the PRNG differently (as far as we can tell from the source code quoted above). To start off, let's explore what happens when lines 91 and 92 are executed, namely when the provider is sourced from the Services.getSecureRandomService() call. This method is actually pretty simple; it's literally three lines of code:

    /**
     * Returns the default SecureRandom service description.
     * Call refresh() before.
     */
    public static Provider.Service getSecureRandomService() {
        return secureRandom;
    }

All it does is return a class field called secureRandom, which is set up by the call on line 91 of SecureRandom's constructor, Services.refresh(); this sets up a "cached" version of the SecureRandom provider. The Android folks probably wanted to avoid searching the list of available providers every time by keeping one cached and ready to be returned when this method is invoked. As far as I know this defaults to the Apache Harmony SecureRandom provider, which we can see being instantiated explicitly in the true branch of the if statement after line 93. Back to line 92 now: if everything worked fine in lines 91 and 92, we will have been handed the Apache Harmony implementation of the SecureRandom provider, and the field called service will not be null.

This would drop us into line 97, the else clause of the if statement, where the provider's metadata is set up and a call to service.newInstance(null) is made, which returns the Service Provider Interface (SPI) object that lets us make the important calls for generating seeds and grabbing secure pseudo-random numbers. This call works as follows:

public Object newInstance(Object constructorParameter) throws NoSuchAlgorithmException {
    if (implementation == null || !className.equals(lastClassName)) {
        ClassLoader cl = provider.getClass().getClassLoader();
        if (cl == null) {
            cl = ClassLoader.getSystemClassLoader();
        }
        try {
            implementation = Class.forName(className, true, cl);
            lastClassName = className;
        } catch (Exception e) {
            throw new NoSuchAlgorithmException(type + " " + algorithm
                    + " implementation not found: " + e);
        }
    }
    if (constructorParameter == null) {
        try {
            return implementation.newInstance();
        } catch (Exception e) {
            throw new NoSuchAlgorithmException(
                    type + " " + algorithm + " implementation not found", e);
        }
    }
    // ...
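The reflective load-and-instantiate pattern above can be sketched in isolation. Note that the class name below is purely illustrative (a stand-in I chose for the demo, not the SPI class the provider actually loads):

```java
import java.util.List;

public class ReflectiveLoadDemo {
    public static void main(String[] args) throws Exception {
        // Resolve a class by name at runtime, much like Provider.Service.newInstance does.
        String className = "java.util.ArrayList"; // illustrative stand-in for the SPI class
        ClassLoader cl = ReflectiveLoadDemo.class.getClassLoader();
        if (cl == null) {
            cl = ClassLoader.getSystemClassLoader();
        }
        Class<?> implementation = Class.forName(className, true, cl);

        // With a null constructorParameter, the no-argument constructor is used.
        Object instance = implementation.getDeclaredConstructor().newInstance();
        System.out.println(instance instanceof List); // true
    }
}
```

The key point is that passing null makes the framework fall back to the no-argument constructor of the resolved class, which is exactly what puts SHA1PRNG_SecureRandomImpl on the unseeded code path discussed next.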

What this means for the Android PRNG: because the SecureRandom constructor calls newInstance with a null argument, the default (no-argument) constructor of SHA1PRNG_SecureRandomImpl will be invoked, and a Service Provider Interface instance wrapping an instance of this class---the actual provider---is returned. So what happens when SHA1PRNG_SecureRandomImpl is invoked via its default constructor? Here's the code for that:

 public SHA1PRNG_SecureRandomImpl() {

        seed = new int[HASH_OFFSET + EXTRAFRAME_OFFSET];
        seed[HASH_OFFSET] = H0;
        seed[HASH_OFFSET + 1] = H1;
        seed[HASH_OFFSET + 2] = H2;
        seed[HASH_OFFSET + 3] = H3;
        seed[HASH_OFFSET + 4] = H4;

        seedLength = 0;
        copies = new int[2 * FRAME_LENGTH + EXTRAFRAME_OFFSET];
        nextBytes = new byte[DIGEST_LENGTH];
        nextBIndex = HASHBYTES_TO_USE;
        counter = COUNTER_BASE;
        state = UNDEFINED;
 }
extract from

All this does is set up some offsets into the seed buffer and fill the buffer with initialization data specific to the SHA-1 algorithm; no problems here so far. Note that the seed array now holds the SHA-1 initialization data in the last few addresses of the array; this data is used as the internal state from which pseudo-random data is generated. How unpredictable the output of the PRNG is depends completely on how it is seeded initially.
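For reference, the H0 through H4 constants written into the tail of the seed array are the standard SHA-1 initial hash values from the SHA specification. A small sketch of my own (not Android code) showing them, with a loose sanity check against the JDK's SHA-1:

```java
public class Sha1InitDemo {
    // Standard SHA-1 initial hash values, per the Secure Hash Standard
    // (the H0..H4 constants written into the seed array above).
    static final int[] IV = {0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0};

    public static void main(String[] args) throws Exception {
        // Every unseeded SHA1PRNG instance starts from this same, public state;
        // all unpredictability must come from the later /dev/urandom seeding.
        System.out.println(IV.length); // 5: five 32-bit words = 160 bits of state

        // Sanity check against the JDK: SHA-1 of an empty input begins 0xda39a3ee.
        byte[] d = java.security.MessageDigest.getInstance("SHA-1").digest(new byte[0]);
        System.out.printf("%02x%02x%02x%02x%n",
                d[0] & 0xFF, d[1] & 0xFF, d[2] & 0xFF, d[3] & 0xFF); // da39a3ee
    }
}
```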

Now, according to how you would normally use SecureRandom: once you've set up an instance via the default constructor you would call SecureRandom.nextBytes(), which in turn calls the SHA1PRNG_SecureRandomImpl.engineNextBytes(byte[]) method. This method, in essence, handles seeding the PRNG if it was not explicitly seeded. Here's how it does that:

 protected synchronized void engineNextBytes(byte[] bytes) {

        int i, n;

        long bits;                // number of bits required by Secure Hash Standard
        int nextByteToReturn;     // index of ready bytes in "bytes" array
        int lastWord;             // index of last word in frame containing bytes
        final int extrabytes = 7; // # of bytes to add in order to compute # of 8 byte words

        if (bytes == null) {
            throw new NullPointerException("bytes == null");
        }

        lastWord = seed[BYTES_OFFSET] == 0 ? 0
                : (seed[BYTES_OFFSET] + extrabytes) >> 3 - 1;

        if (state == UNDEFINED) {

            // no seed supplied by user, hence it is generated thus randomizing internal state
            updateSeed(RandomBitsSupplier.getRandomBits(DIGEST_LENGTH));
            nextBIndex = HASHBYTES_TO_USE;
            // ...

extract from

We can see that if the state of the engine is UNDEFINED it proceeds to seed the PRNG using RandomBitsSupplier.getRandomBits(DIGEST_LENGTH), where DIGEST_LENGTH is defined as 20, the digest length of SHA-1 in bytes. Here's how the seeding works: getRandomBits calls a method named getUnixDeviceRandom(int), which looks like this:

  private static synchronized byte[] getUnixDeviceRandom(int numBytes) {

        byte[] bytes = new byte[numBytes];

        int total = 0;
        int bytesRead;
        int offset = 0;
        try {
            for ( ; ; ) {

                // "bis" is the input stream opened over /dev/random or /dev/urandom
                bytesRead = bis.read(bytes, offset, numBytes - total);

                // the case below should not occur because /dev/random or /dev/urandom is a
                // special file; hence, if it happens there is some internal problem
                if ( bytesRead == -1 ) {
                    throw new ProviderException("bytesRead == -1");
                }

                total  += bytesRead;
                offset += bytesRead;

                if ( total >= numBytes ) {
                    break;
                }
            }
        } catch (IOException e) {

            // actually there should be no IOException because the device is a special file;
            // hence, there is either some internal problem or, for instance,
            // the device was removed at runtime, or something else
            throw new ProviderException("ATTENTION: IOException in RandomBitsSupplier.getLinuxRandomBits(): " + e);
        }
        return bytes;
  }
extract from
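The read-until-full pattern above can be sketched generically against any InputStream (the names here are mine, not Harmony's). The loop is necessary because a single read() may legitimately return fewer bytes than requested:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ReadFullyDemo {
    // Keep reading until numBytes have been collected, as getUnixDeviceRandom does.
    static byte[] readFully(InputStream in, int numBytes) throws IOException {
        byte[] bytes = new byte[numBytes];
        int total = 0;
        while (total < numBytes) {
            int bytesRead = in.read(bytes, total, numBytes - total);
            if (bytesRead == -1) {
                throw new IOException("unexpected end of stream");
            }
            total += bytesRead;
        }
        return bytes;
    }

    public static void main(String[] args) throws IOException {
        // A fixed in-memory stream stands in for /dev/urandom so the demo is deterministic.
        byte[] src = {1, 2, 3, 4, 5, 6, 7, 8};
        byte[] out = readFully(new ByteArrayInputStream(src), 8);
        System.out.println(Arrays.equals(src, out)); // true
    }
}
```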

This byte array---as the previously quoted code indicates---is passed to updateSeed, which does the following:

 private void updateSeed(byte[] bytes) {

        // on call:   "seed" contains current bytes and current hash;
        // on return: "seed" contains new current bytes and possibly new current hash
        //            if, after adding, seed bytes overfill its buffer
        SHA1Impl.updateHash(seed, bytes, 0, bytes.length - 1);

        seedLength += bytes.length;
 }

In summary: via SHA1Impl.updateHash, the bytes pulled from /dev/urandom are fed into a hashing operation and saved in the seed[] array as the input to the next round of pseudo-random byte generation.
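Conceptually, this step just mixes the sampled bytes into a SHA-1 computation. A rough analogy of mine using the standard MessageDigest API (the real Harmony code hashes incrementally into the seed[] array instead):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SeedMixDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Pretend these 20 bytes came from /dev/urandom
        // (all zeros here, purely so the demo is deterministic).
        byte[] entropy = new byte[20];

        // Mix the entropy into a SHA-1 state, analogous to SHA1Impl.updateHash.
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(entropy);
        byte[] state = sha1.digest();

        // SHA-1 always yields a 20-byte (160-bit) result.
        System.out.println(state.length); // 20
    }
}
```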

This next part is where I believe the Android PRNG goes wrong, and it involves how the counter is updated. The counter is stored in the seed array and serves to help ensure that the deterministic nature of a hashing algorithm like SHA-1 does not negatively influence the pseudo-random nature of the output. Ironically, the way the counter is written into the seed array actually destroys some of the pseudo-random data pulled from /dev/urandom. The first 5 words of the seed array hold the /dev/urandom-sampled data, but when the counter is updated its position in the seed array is calculated from a value called seed[BYTES_OFFSET], which evaluates to 0 after the initial sampling from /dev/urandom. This means the counter data overwrites some of the actual seed used in subsequent hashing operations. Here's some of the problem code:

 lastWord = seed[BYTES_OFFSET] == 0 ? 0
                : (seed[BYTES_OFFSET] + extrabytes) >> 3 - 1;

As we can see, lastWord will be calculated as 0, meaning that the following code, further down in SHA1PRNG_SecureRandomImpl:

        n = seed[BYTES_OFFSET] & 0x03;
        for (;;) {
            if (n == 0) {

                seed[lastWord] = (int) (counter >>> 32);
                seed[lastWord + 1] = (int) (counter & 0xFFFFFFFF);
                seed[lastWord + 2] = END_FLAGS[0];

            } else {

                seed[lastWord] |= (int) ((counter >>> RIGHT1[n]) & MASK[n]);
                seed[lastWord + 1] = (int) ((counter >>> RIGHT2[n]) & 0xFFFFFFFF);
                seed[lastWord + 2] = (int) ((counter << LEFT[n]) | END_FLAGS[n]);
            }
            // ...
extract from lines 349-361

will overwrite some of the /dev/urandom-sampled data, which causes consecutive calls to SecureRandom.nextBytes to generate pseudo-random data from a largely deterministic internal state! Here's a screenshot of how the seed value, i.e. the internal state, converges; you should see that after just 3 calls the seed converges to only having 64 bits of truly unpredictable data:

 Each column is a separate run of the SHA1PRNG implementation currently on Android. The tests were done on an Ubuntu 12.04 machine, though this has little or no effect on the behaviour of the flaw. This printout shows the first 8 bytes of the seed array in SHA1PRNG_SecureRandomImpl.
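The entropy loss can be mimicked with a toy model (my own abstraction of the bug, not the actual Harmony code): fill an array with fresh random words, then let a counter update clobber the leading words, as the lastWord == 0 path does:

```java
import java.util.Random;

public class CounterClobberDemo {
    public static void main(String[] args) {
        // Toy stand-in for the start of the seed[] array: 5 words of "fresh" entropy.
        Random entropySource = new Random(42); // fixed seed for a reproducible demo
        int[] seed = new int[5];
        for (int i = 0; i < seed.length; i++) {
            seed[i] = entropySource.nextInt();
        }

        int[] before = seed.clone();
        long counter = 1;

        // The lastWord == 0 path: the counter overwrites seed[0..2],
        // destroying three words (96 bits) of the sampled entropy.
        seed[0] = (int) (counter >>> 32);
        seed[1] = (int) (counter & 0xFFFFFFFFL);
        seed[2] = 0x80000000; // stand-in for the END_FLAGS[0] padding word

        int survivingWords = 0;
        for (int i = 0; i < seed.length; i++) {
            if (seed[i] == before[i]) {
                survivingWords++;
            }
        }
        System.out.println(survivingWords); // 2: only 64 bits of entropy survive
    }
}
```

This mirrors the convergence shown in the screenshot: of the 160 bits sampled from /dev/urandom, only the last two words escape the counter update.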


  1. Hi, thanks for your post. It is clear.
    A possible solution might be to re-calculate the lastWord variable after updateSeed(RandomBitsSupplier.getRandomBits(DIGEST_LENGTH)); since seed[BYTES_OFFSET] is updated.
    But I have a few questions about calculating the variable lastWord.
    1. In Java operator precedence, >> is lower than +/-. Therefore (...) >> 3 - 1 is equal to (...) >> 2. I am not sure if this is what it is meant to do.
    2. Why is extrabytes used here? At first I assumed that a word is 8 bytes (as the comment after declaring extrabytes suggests). Thus, (num_bytes + 7) / 8 - 1 would be the index of the last word, modulo the operator-precedence issue above. But it seems that a word here is 4 bytes and lastWord is an index into the seed array.
    I think the correct implementation should be lastWord = seed[BYTES_OFFSET] >> 2. Am I right?
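    The operator-precedence observation in point 1 is easy to confirm mechanically; a quick check with illustrative values:

```java
public class PrecedenceDemo {
    public static void main(String[] args) {
        // In Java, additive operators bind tighter than shifts,
        // so x >> 3 - 1 parses as x >> (3 - 1), i.e. x >> 2.
        int extrabytes = 7;
        boolean allEqual = true;
        for (int n = 0; n <= 64; n++) {
            int a = (n + extrabytes) >> 3 - 1;
            int b = (n + extrabytes) >> 2;
            if (a != b) {
                allEqual = false;
            }
        }
        System.out.println(allEqual); // true
    }
}
```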


    1. Hey, thanks for your comment,

      1. Yeah, that "- 1" seems a bit iffy to me; it's typically something a developer leaves in code after trying to hack it into a functioning condition--brute-forcing a working solution. So I'm not sure why it's calculated this way :/ I would have simply left this as ">> 2"; in fact, if you have a look at the provider implementation for the SHA message digest you'll see it calculated as you suggested :)

      lastWord = (buffer[BYTES_OFFSET] + 3)>>2 ; // computing of # of full words by shifting
      // # of bytes
      from [line 86-87]

      2. I think "extrabytes" is added to compensate for a padding frame, though other instances of this calculation are done quite differently, including the example above. In SHAImpl, the byte offsets are calculated as follows:


      int index = intArray[BYTES_OFFSET];
      int i = fromByte;
      int maxWord;
      int nBytes;

      int wordIndex = index >>2;
      int byteIndex = index & 0x03;
      from [lines 174 - 180]

      I honestly am not sure, but I think it has something to do with compensating for a frame that holds other counter data. I read this code to find out how the PRNG corrupts /dev/urandom output; I haven't studied a lot of its other properties.

    2. Thanks for your prompt reply.

      After studying the lines that fill in the counter and END_FLAGS (in the for(;;){} loop), I think lastWord means the position in the seed array of the first not-fully-filled element.
      For example, if we have filled 3 bytes into seed, I think lastWord should be 0 so that the line
      seed[lastWord] |= (int) ((counter >>> RIGHT1[n]) & MASK[n]);
      can make sense.
      seed[BYTES_OFFSET] = 0 => lastWord = 0
      seed[BYTES_OFFSET] = 1 => lastWord = 0
      seed[BYTES_OFFSET] = 2 => lastWord = 0
      seed[BYTES_OFFSET] = 3 => lastWord = 0
      seed[BYTES_OFFSET] = 4 => lastWord = 1
      which is why I think lastWord = seed[BYTES_OFFSET] >> 2, without adding extrabytes before shifting.
      Please correct me if I'm wrong.
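      The table above can be checked mechanically; a quick sketch of the proposed formula:

```java
public class LastWordDemo {
    public static void main(String[] args) {
        // Proposed formula: lastWord = seedBytes >> 2, the index of the first
        // not-fully-filled 4-byte word in the seed array.
        int[] seedBytes = {0, 1, 2, 3, 4};
        int[] expected  = {0, 0, 0, 0, 1};
        boolean matches = true;
        for (int i = 0; i < seedBytes.length; i++) {
            if ((seedBytes[i] >> 2) != expected[i]) {
                matches = false;
            }
        }
        System.out.println(matches); // true
    }
}
```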


    3. Gimme some time to read through the implementation a bit more. I would love to agree with you but right now I'm not completely sure, although based on the way it's calculated in other instances I think you may be on the right track ;)

  2. This comment has been removed by the author.

  3. Hi,

    I am trying to replicate the random number generator problem.

    Basically I am doing three tests:

    1) Using different random instances:

    SecureRandom random = new SecureRandom();
    byte rnd1[] = new byte[8];

    for (int i = 0; i < 10000; i++) {
        random = new SecureRandom();
        // ...
    }

    2) Using the same random instance:

    SecureRandom random = new SecureRandom();
    byte rnd1[] = new byte[8];

    for (int i = 0; i < 10000; i++) {
        // ...
    }

    3) Test output with the same seed:

    byte rndbytes1[] = new byte[4];
    byte rndbytes2[] = new byte[4];
    SecureRandom random = new SecureRandom("123456790".getBytes());



    Test 1: All output values are different.
    Test 2: All output values are different.
    Test 3: rnd1 and rnd2 are different.

    Can anyone explain what I am doing wrong ?

    Thank you.

    1. Hey Hector,

      I actually still have the code I used to study this vulnerability in detail; I'll post it on my GitHub page first thing tomorrow. Hopefully this will show you the same things I noticed; I'll post a link to the code in a follow-up comment.

      What I can say about your approach is that you may be looking for the wrong thing, and you may also be looking for it in the wrong way. Well, not "wrong", but it would require a lot of work to detail the flaw using this kind of approach, namely inspecting and comparing the output of the PRNG.

      The problem stems from how the "true" pseudo-randomness used to seed the PRNG *implicitly* is corrupted--by this I mean the algorithm asks for X bits of randomness but actually only operates on X-Y bits of randomness, where Y is just too large to be ignored. Pseudo-randomness here is the input to the PRNG sampled from /dev/urandom when it is not seeded explicitly.

      What this causes is that certain subsets (of the bits) of the pseudo-random seed become predictable because of how they are corrupted--namely, with counter values. It also violates the requirements for a cryptographically secure PRNG: we need X bits of pseudo-random input, but only X-Y reliable pseudo-random bits are being used.

      The way I tested the code was to look at how the pseudo-random seed is corrupted, and whether multiple runs of the algorithm end up using the same seed, or have large portions of the seed in common, when seeded implicitly--the screenshot in the blog post shows the seed value and how it changes after successive calls. What would this mean? Given that the PRNG uses SHA to compute its output, if you know what the seed is, or is going to be, for the digest (SHA-1) computation--or a large enough part of it--you can determine what the next digest will be. Simply seeding it and checking the output will not reliably show you the correlation, especially if you're not inspecting the individual bits. You may be looking at the characters in the digest to see if they are "verbatim" the same, but what you may miss is that certain bit patterns recur; for instance, what do the characters "a" and "z" have in common? They aren't *exactly* the same, but their bit patterns are terribly similar! I'm not sure how you are checking the seed for similarity, but if you're simply inspecting the characters and checking whether they are the same, you miss a lot of detail.

      Some advice: try looking at how much of the bit pattern is the same. See whether you can reliably determine the probability of a certain bit in the PRNG output being a certain value, namely: what is the probability (based on previous outputs) that a certain bit will be 0 or 1? If it's significantly greater than 50%, say closer to 60%, then you have what's called a biased PRNG, which means it's not cryptographically secure at all!
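      The bit-bias check suggested here can be sketched as a crude monobit frequency count (a toy version only; a real analysis needs far more data and proper statistics):

```java
public class MonobitDemo {
    // Fraction of 1-bits in the sample; a strongly biased PRNG drifts from 0.5.
    static double onesFraction(byte[] sample) {
        long ones = 0;
        for (byte b : sample) {
            ones += Integer.bitCount(b & 0xFF);
        }
        return ones / (double) (sample.length * 8);
    }

    public static void main(String[] args) {
        // Two hand-made samples instead of real PRNG output, so the demo is deterministic.
        byte[] balanced = new byte[1000];
        java.util.Arrays.fill(balanced, (byte) 0xAA); // 10101010: exactly half ones
        byte[] biased = new byte[1000];
        java.util.Arrays.fill(biased, (byte) 0xFE);   // 11111110: 7/8 ones

        System.out.println(onesFraction(balanced)); // 0.5
        System.out.println(onesFraction(biased));   // 0.875
    }
}
```

In practice you would feed SecureRandom.nextBytes output into a counter like this and watch whether the fraction stays near 0.5 as the sample grows.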

      I'll push the code tomorrow and talk about what you should be noticing and why it's bad.


    2. Hi,

      Thanks for your reply. Reading about the random number generator weakness I found
      that there are two different Java implementations: "java.util.Random" and
      "java.security.SecureRandom".

      I did not realize it. I was trying to replicate the problem from the following website:
      but that is a different weakness (a previous one). [January, 2013] For this reason I expected
      to have the same outputs, because on that website the "r1" value is the same.
      But how is it possible that the PRNG repeats the output (r1)?

      On the other hand, now I can see that the weakness you describe here is
      different. Firstly, because it affects "". Secondly,
      because even using the same "seed" (thanks to /dev/urandom) the PRNG gives
      different outputs, though it is not as good as it should be.
      I got confused because a new Bitcoin problem occurred this month. [August, 2013]
      In that case no PRNG outputs were repeated, but by collecting enough messages
      they could obtain the private key. How? Brute force? Was this because of the
      seed problem described in this post?

      I would like to link all the PRNG problems and their causes from the beginning.

      Thank you

    3. For the flaw described in this blog post, you can confirm that the entropy is reduced by doing this:
      Replace the line
      Streams.readFully(devURandom, result, 0, byteCount);
      with some hardcoded byte array (like byte[] result = new byte[]{0, 1, 2, 3, ..., 19};).
      If you can produce the same random number after changing some element of the byte array, I think it is proof that the entropy is reduced to fewer bits (64 bits in this case).

