Hi... it's 5 days before the October test and I've looked through most of the practice (old exam) material...

I'm completely stunned.

The first test I went through was 9677. I felt comfortable with the content and thought I could get most of it correct, minus a handful of silly mistakes and other errors. Seeing that you needed a raw score of only 56 for a 900 and 67 for a 990, I felt pretty confident I'd do fine.

Then I looked at test 1077... I was blown away. Those problems seemed far more difficult, with many more of those classic ETS evasive booby-trap answers and a ton of questions based on random memorized facts. Since this test is the one most students say resembles the actual exam in recent years, I really started worrying... Furthermore, seeing that WE NEEDED A RAW SCORE OF 73 for a 900 and 85 for a 990 on that version of the test, I'm starting to panic.

I am totally befuddled over how they score these tests. On test 9677, I'd finish a problem that seemed mundane and routine, see in the back that only around 32% of the students got it right, and think, "Wow, I'm going to slam dunk this test." Then on 1077 I'd finish a problem that was arduous and confusing, get it wrong half the time, and find in the back that around 70% of the examinees got it correct. I just don't understand.

Here's just a few examples of what I'm talking about...

1) In 9677, there was a question (45) asking which of the following 5 circuits is a high pass filter...

In 1077, there was a nearly identical question (39), except it gave us FOUR CIRCUITS AND ASKED US WHICH TWO are high pass filters...

In both cases, 45% of the examinees got it correct. How could that be?
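As a refresher on the circuit concept behind both questions: a simple RC high-pass (series capacitor, shunt resistor) has gain magnitude |H| = ωRC / √(1 + (ωRC)²), which vanishes at DC and approaches 1 at high frequency. A quick numerical sketch in Python (the component values are arbitrary, just for illustration):

```python
import math

def rc_highpass_gain(freq_hz, r_ohms, c_farads):
    """|Vout/Vin| for a series-C, shunt-R high-pass filter:
    |H| = wRC / sqrt(1 + (wRC)^2), with w = 2*pi*f."""
    wrc = 2 * math.pi * freq_hz * r_ohms * c_farads
    return wrc / math.sqrt(1 + wrc ** 2)

# Arbitrary example values: R = 1 kOhm, C = 1 uF  ->  cutoff ~ 159 Hz
R, C = 1e3, 1e-6
print(rc_highpass_gain(1.0, R, C))   # far below cutoff: gain near 0
print(rc_highpass_gain(1e6, R, C))   # far above cutoff: gain near 1
```

Swapping R and C in the circuit swaps the roles of the two limits, which is exactly the distinction those exam questions test.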

2) One practice test asked us to find the minimum lens diameter needed to resolve an image, but because the answer choices differed by orders of magnitude, it didn't matter whether we used λ/D, 1.22λ/D, or 2.44λ/D...

There was a nearly identical question on 1077 (13), but in this case the answer choices were so close together that we had to remember the exact Rayleigh criterion formula, θ = 1.22λ/D; otherwise we would not get the correct answer.
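To see how much the 1.22 prefactor matters, here is a short sketch inverting the Rayleigh criterion for the lens diameter (the wavelength and target angle are assumed values for illustration, not from the exam):

```python
# Rayleigh criterion: theta_min = 1.22 * lam / D, so the smallest lens
# diameter resolving an angular separation theta is D = 1.22 * lam / theta.
lam = 550e-9       # wavelength in meters (green light, assumed)
theta = 1e-5       # target angular resolution in radians (assumed)

D_rayleigh = 1.22 * lam / theta    # with the correct prefactor
D_naive = lam / theta              # dropping the 1.22
print(D_rayleigh, D_naive)         # only ~22% apart, so closely spaced
                                   # answer choices force you to know 1.22
```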

3) Finally, there was an error analysis question (16) that not only required us to do a tedious computation of average error or standard deviation (I don't know which they wanted), but whose answer did not seem to come from any of the methods I learned in my statistics class... Someone told me the solution was based on a specific property of "Poisson processes," but to me, requiring us to know that is as unreasonable as asking a "Who did that experiment and on what day of the week?" question on the GRE.

On the other tests, when I got an answer wrong, my mistake became clear after I reviewed the problem more carefully, but on 1077, I'm still clueless over how I'm supposed to do some of the ones I got wrong.

Has anyone else noticed drastic differences among the practice tests? I haven't "taken" the practice tests in a timed setting, but for those of you who have, have your scaled scores been consistent or unpredictable? For those of you who took the real test, did you feel the practice tests prepared you adequately, and did your real score agree with your practice test scores?

Sorry for the length, I was trying to get my point across clearly with supporting facts.

Regards

## Bewildered Over GRE Practice Tests

- butsurigakusha
**Posts:** 293 **Joined:** Sun Oct 07, 2007 8:05 pm

About the error analysis problem, I was able to solve it quite simply just by remembering something from my first-year laboratory class: the error in counting random events, such as radioactive decays, is sqrt(N). So if you wanted to measure the decay rate and you counted 100 events in a minute, the uncertainty would be sqrt(100) = 10. For an uncertainty of 1 percent, sqrt(N)/N = 1/100 gives N = 10000. And looking at the data given, one can pretty quickly add up the numbers and divide by 10 to get the average number of counts per one-second interval, which turns out to be 2. So we would need 10000/2 = 5000 seconds in order to get 10000 counts.
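That arithmetic can be sketched in a few lines (the 1% target and the 2 counts-per-second rate come from the problem data described above):

```python
# Counting statistics: the uncertainty on N counted events is sqrt(N),
# so the fractional uncertainty is sqrt(N) / N = 1 / sqrt(N).
target_fraction = 0.01                    # want 1% precision
N_needed = (1 / target_fraction) ** 2     # 1/sqrt(N) = 0.01  ->  N = 10000

rate = 2.0                                # average counts per second
time_needed = N_needed / rate             # seconds of counting required
print(N_needed, time_needed)              # 10000.0 5000.0
```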

In sum, just remember that the uncertainty in random counting is sqrt(N). A similar problem is found on exam 8677, number 40. So there is a good chance that a similar problem will be on the exam that you take, and if it is like 40 on 8677, then it will be basically a free point that requires almost no time.

- butsurigakusha
**Posts:** 293 **Joined:** Sun Oct 07, 2007 8:05 pm

When I first saw that question, I tried to apply the central limit theorem and just divide the sample standard deviation by sqrt(N).

But in the case of radioactivity, and Poisson processes in general, I see you need to take sqrt(sample mean) instead.

I never learned that (as far as I remember) in my courses, so originally I thought that practice question, being based on a random memorized fact, was not appropriate for an exam. However, since exponential decay processes are everywhere in physics, I guess it isn't so bad for them to expect us to remember and understand that concept...
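The variance-equals-mean property is easy to verify numerically. A quick simulation sketch (the rate of 9 counts per window is an arbitrary choice; counts are generated by summing exponential waiting times, which is what defines a Poisson process):

```python
import random

random.seed(0)  # reproducible run
RATE = 9.0      # expected counts per unit window (arbitrary illustration)

def poisson_count(rate):
    """Count exponential-waiting-time arrivals in one unit time window."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return n
        n += 1

counts = [poisson_count(RATE) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For a Poisson process the sample variance tracks the sample mean,
# so the spread of the counts is sqrt(mean) rather than an independently
# measured standard deviation.
print(mean, var)   # both land near 9
```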

Nevertheless, I did see (at least) one question on Saturday's exam that really stuck out to me as being based on a random memorized expression.

Anyone who recalled that simple expression could answer the question in 10 seconds. But anyone who didn't would have no hope of deriving it in any reasonable amount of time during the exam. I know we're not supposed to reveal details and I won't, but I'm hoping they someday release the October exam I took so I can discuss it with other people.

But for those of you taking the Nov test, don't worry, the "random" questions, as I call them, are few and far between. Most questions simply require clever thinking with concepts everybody remembers. Though of course, there are a few of those completely off-the-wall problems, like some on the practice tests, that you read and think, "How do they come up with stuff like this?" Haha!

Looking back at my paranoid posts from before my exam gives me a great feeling of relief. Many students think the GRE requires too many random formulas that we shouldn't be expected to have memorized. However, I noticed that nearly all of the formulas needed are part of the specific topics listed on the website; for instance, the Doppler effect, Bohr model, and Compton scattering formulas. These formulas are quite random, but ETS is basically telling us they're going to be tested by posting them on the website.

Earlier in this thread I was angry about some of the radioactive decay problems that showed up on the 0177 test, thinking their solutions required obscure equations we shouldn't have to know...

However, I realized the website lists "counting statistics" as a topic. When I originally reviewed that page, I skimmed over counting statistics thinking, "Well, I know how to count and I know some statistics, so I'm fine here." But I looked it up, and it turns out counting statistics refers specifically to the statistics of radioactive decay and measurement. If I had reviewed some details of that subject, I would have been prepared for those obscure questions. The point is, read those topics!