Softball

Well. It seems to bother you. If it doesn't, don't respond. Incidentally, your facts are wrong.

Let's see if I can explain a concept. If you wish to compare groups A and B, and you have no direct comparison, you look for some way to compare them. Conveniently, both A and B have played C and D. So, how do you compare?

First, let's see if we can recognize a simple fact. Within A, we can see the following:

1) hopefully, they all play each other in order to get a complete picture,
2) but, we can see that it makes a difference where the games are played. Within A, if you play at home, you tend to win. If you play on the road, you tend to lose. So, in order to try to get some balance within the members of A, you not only need to play every team, but you also need to play at home and on the road against every team.

Unfortunately, while this is done in professional baseball (except for interdivision and interleague play), there is simply no way that you will get every team in A to play every other team in A at home and on the road. But you get as much information as you can, realizing that you can't even be accurate in your assessment of group A internally without complete information on the factors that appear to make a difference.

But now you wish to utilize information derived from comparisons against C and D. You begin with the knowledge that it makes a difference where you play. Yet, when playing C and D, you make every effort to play all of your games at home.

Just from what you know about comparisons within group A, you cannot get anything valid if you only play at home. It makes your information useless, and your comparisons absurd. You want A ranked higher than B when A only plays at home while B plays mostly on the road? And you don't see the problem?

Mentally, you aren't on third base with this. You aren't even out of the starting gate until you find some way of comparing A and B when they are operating under different parameters against C and D.

Now, if you don't get the picture, that's why you need to look at non-conference schedules rather than intraconference games when trying to compare A to B. Your intraconference games are meaningless as comparison tools.
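To make that concrete, here is a minimal sketch of the kind of comparison I mean. The conferences, game records, and the interconference_wpct helper are all invented for illustration; the point is only that the comparison uses nothing but games against outside opponents and keeps the home/road split visible.

```
# Hypothetical sketch: judge conferences A and B only by their games
# against outside opponents (C and D), keeping the home/road split
# visible. All records below are invented for illustration.

games = [
    # (team, conference, opponent_conference, location, won)
    ("A1", "A", "C", "home", True),
    ("A1", "A", "D", "road", False),
    ("A2", "A", "C", "road", True),
    ("B1", "B", "C", "home", True),
    ("B1", "B", "D", "road", True),
    ("B2", "B", "D", "road", False),
]

def interconference_wpct(conference, location=None):
    """Winning percentage in games against other conferences,
    optionally restricted to home or road games."""
    rows = [g for g in games
            if g[1] == conference and g[2] != conference
            and (location is None or g[3] == location)]
    return sum(g[4] for g in rows) / len(rows) if rows else None

for conf in ("A", "B"):
    print(conf,
          "overall:", interconference_wpct(conf),
          "home:", interconference_wpct(conf, "home"),
          "road:", interconference_wpct(conf, "road"))
```

A real version would also need to balance home and road games before the overall numbers mean anything, which is exactly the scheduling problem described above.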

Your standard diversion tactic when you cannot defend the facts. You want to focus on non-conference road games and disregard conference road games. Hogwash!! Per the attached link, the RPI has long calculated a Road RPI for all teams, and 13 of the top 16 Road RPI teams are from the SEC (8) and Pac-12 (5). The exceptions are OU, FSU and Baylor.

So the system does address your complaint about road wins, along with the quality of those road wins. Whether those road games are conference games or non-conference games is irrelevant. What is relevant is how good those road teams were.

It should be noted that Hofstra (14) would have made the top 16 in Road RPI, but they were not seeded for the NCAA tourney, nor are they a Power 5 school, so I excluded them.

Look further at the link and it will also note that OU played the 20th-strongest SOS and had the 19th-strongest opponents' SOS. That is their demise in the seedings.

https://extra.ncaa.org/solutions/rpi/Stats Library/SB Team Rankings Through 5-13-2018.pdf
 
It doesn’t seem to me that a four seed vs. a three RPI is a “demise.” I tend away from the histrionic.
 
Your standard diversion tactic when you cannot defend the facts. You want to focus on non-conference road games and disregard conference road games. Hogwash!!

You have no idea about how to test an idea? Really? YES. When trying to develop a comparison between Group A and Group B, you have to deal only with data that allow a valid comparison. You must get a complete and valid picture of Group A internally and Group B internally.

It is totally invalid to arrive at an evaluation of A or B if you use different criteria for the comparison. You must try to equalize them. Thus, we use only non-conference data to see how A and B respond to equivalent situations.
 
It is utter stupidity to conclude that non-conference games are the only criterion for comparing teams. In softball you are excluding close to half of the games, in many cases against the best competition. In football you would exclude 65-75% of the games played.

Trying to apply a scientific technique to athletic contests is idiocy. First, your comparison of non-conference opponents is flawed because each team plays a different non-conference schedule. There is virtually no equivalency, and too much important data is excluded. Hint: a different non-conference schedule is exactly as flawed as a different conference schedule. I am surprised that, with your scientific background, you fail to recognize the lack of commonality when comparing non-conference schedules.

Athletic schedules virtually prohibit the ability to create equivalent situations; hence the advent of the ranking systems, both subjective (polls) and quantitative (RPI, Sagarin, etc.), all of which have their shortcomings.

Like it, love it, or hate it: the WCWS participants compared to their seedings over the last decade strongly illustrate the effectiveness the RPI system has had in the seeding process. But hopefully they will continue to tweak the system on an ongoing basis to make it even more effective.
 
Did you not have even one course in statistics, science, or anything that dealt with how to compare things?

If you want to compare A to B, you must compare them on the same basis.

You can't put one animal on one diet and a different one on another, and then also keep them all in different environments. You control every variable that you can, with the exception of the ones that you wish to test.

You can't compare conference A to conference B by mixing variables. You can't include games within conference A when that is what you are trying to compare. You invalidate the data and compromise the comparison.
 
Yes, I have a master's in statistics, and I know that what you are attempting to compare is an invalid comparison. Comparing OU playing Oregon to Florida playing FSU is no more significant in determining which team should be seeded than comparing Florida playing Georgia to OU playing OSU. Also, not all non-conference teams have an equal probability of playing a particular team, for a multitude of reasons.

All your approach would do is compare non-conference scheduling, and the system already has a non-conference RPI calculation. Check the link below. It shows that 11 of the top 16 non-conference RPIs are from the SEC (6) and Pac-12 (5). OU, FSU, Baylor, Ohio State and Long Beach State are the other 5.

To legitimately compare teams for seeding purposes you must compare the entire season, and that means all 50+ games. You cannot isolate the non-conference schedule and totally disregard the 27 conference games Florida played against top-25 teams while focusing only on their 6 non-conference games against top-25 teams. OU's 21 conference games and Florida's 27 conference games matter significantly in ranking the best teams. To use that logic in the name of being scientific makes no sense.

It is merely your standard diversion tactic to avoid discussing the relative completeness of the RPI approach.

What we do know about RPI is:

1. They weight home, road and neutral wins and losses to reflect location.
2. They factor in wins against teams ranked 1-25.
3. They factor in wins against teams ranked 26-50, 1-100 and 1-150.
4. They consider only Division I opponents.
5. They track and factor in the team's road record.
6. They track the last 10 games, though there is some question whether this is still used.
7. They factor in Division I winning percentage.
8. They factor in SOS.
9. They factor in opponents' SOS.
10. They track and factor in road success.
11. They calculate a Road RPI.
12. They calculate a normal RPI.
13. They calculate a non-conference RPI.
14. They calculate a conference RPI.
15. They calculate an adjusted RPI.

Now, what do you want included that is missing? It definitely is not non-conference comparison, as is illustrated by the system's non-conference RPI, Road RPI and their respective records.
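For reference, here is a minimal sketch of the commonly published RPI skeleton that sits behind items 7-9 and 12 above: RPI = 0.25 x WP + 0.50 x OWP + 0.25 x OOWP, with the team's winning percentage weighted by game location. The 25/50/25 weights are the widely cited ones; the 0.7/1.3 location multipliers, the helper names, and the example numbers are assumptions for illustration only, not the committee's actual softball values.

```
# Sketch of the commonly published RPI structure:
#     RPI = 0.25 * WP + 0.50 * OWP + 0.25 * OOWP
# WP is the team's location-weighted winning percentage; OWP and OOWP
# are the opponents' and opponents' opponents' winning percentages.
# The 0.7/1.3 multipliers are illustrative assumptions only.

LOCATION_WIN = {"home": 0.7, "road": 1.3, "neutral": 1.0}
LOCATION_LOSS = {"home": 1.3, "road": 0.7, "neutral": 1.0}

def weighted_wp(results):
    """results: list of (won, location) pairs, e.g. (True, 'road')."""
    wins = sum(LOCATION_WIN[loc] for won, loc in results if won)
    losses = sum(LOCATION_LOSS[loc] for won, loc in results if not won)
    total = wins + losses
    return wins / total if total else 0.0

def rpi(wp, owp, oowp):
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Invented example: a 2-1 team whose road win counts extra.
wp = weighted_wp([(True, "home"), (True, "road"), (False, "road")])
print(round(rpi(wp, owp=0.55, oowp=0.52), 4))
```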

The committee takes all the above data and its RPI ranking, then puts their "eye test" on the results to derive their seedings. And you suggest disregarding the above wealth of information in order to do a controlled scientific comparison of non-conference games in the name of equivalency.

Foolery! The RPI makes its quantitative analysis by considering not some but all of a team's performance data to make its team comparisons and seedings. Far more complete than only looking at a subset of the data.

Want to tweak the system to give non-conference games more weight? A legitimate desire for you. Want more transparency on system calculations? A legitimate desire for me. Want more weighting for team W% and less weight for SOS? A legitimate desire for me. But neither of us has enough data to substantiate the need for the desired changes. It is what it is: a valueless opinion from us both.
 
You keep referring to a system that has not been established to be valid, as though continuing to assert its validity would make it valid.

Let me try to explain it one last time. Let's say:

1) we want to establish the effect of a medication on rats
2) we must administer the medication in the same way to every rat
3) we must have every rat maintained in the same exact manner throughout the procedure: diet, temperature, lighting, etc.
4) we must have a control group that does not receive the medication, but which undergoes every other aspect of the experiment exactly the same as the rats that receive the medication.

In this way, and only in this way, can you define the effect of a medication. You must have a control population. This is also the reason that human experimentation is difficult. Most don't want to be treated as test subjects with every aspect of their lives controlled.

You test one thing at a time.

Now, you want to compare conferences? You have to have a controlled environment. Since that isn't possible, you reduce the variables as much as you can.

You can't compare conferences while using data from within a conference as part of the comparison. You compare how they perform against others, not against others and within themselves at the same time.

If you didn't learn this, how could you perform statistics? You just keep repeating the same mantra--that you must include RPI results against conference teams. It is the RPI that is in question. Can you get that into your mind? It is the validity of the RPI that is in question.

If we get the right questions answered, we may find that the SEC is stronger. But we can't unless we can validate something that isn't yet established: the validity of the RPI formula.
 
Congrats! I took the bait and let you divert the discussion from the effectiveness of the RPI rankings/seedings, which has been well documented over the last decade, to a discussion about testing techniques, which is germane to nothing regarding the RPI rankings/seedings. As always, you want to talk about anything but the subject matter.

You want non-conference performance to be the sole criterion for comparing teams. Hogwash! While the committee does use that criterion as part of their comparison and provides a non-conference RPI ranking, they have also concluded that several other factors (listed in a previous post) must be considered, and they have devised systems to track, evaluate and compare these factors in determining their seedings.

You must measure and compare all factors that impact team results if that is what you are attempting to compare. Difficulty measuring and comparing a factor does not warrant excluding it as a comparison factor, which is what you are doing when you disregard conference performance.

The comparison you want has long been done by the committee with their non-conference RPI rankings. But they rightly measure a multitude of other factors that impact team results and can effectively be used in making a final seeding. But just for you, their non-conference RPI results are as follows.

N-C RPI--------------Adj RPI

#1 FSU -------------------5
#2 UCLA------------------2
#3 Oregon----------------1
#4 Washington-----------6
#5 OU---------------------4
#6 ASU -------------------7
#7 Tennessee-------------8
#8 LSU-------------------11
#9 Kentucky-------------13
#10 Florida ---------------3
#11 Georgia-------------11
#12 S. Carolina-----------9
#13 Baylor---------------14
#14 Arizona--------------12
#15 Ohio State-----------30
#16 Long Beach State---21
#17 Mississippi State----20
#18 A&M-----------------15
#19 Louisiana------------22
#20 Auburn--------------18
#21 Alabama-------------16
#22 Arkansas ------------17
#23 Nebraska ------------45
#24 Minnesota -----------24
#25 Michigan -------------31

Notice a trend among conferences using your non-conference criteria? 8 of the top 10 and 16 of the top 25 are from the SEC/Pac-12. The numbers for the overall adjusted RPI are 8 and 17. 11 are SEC teams, and the two other SEC teams that made the 64-team tourney were Missouri (#28) and Mississippi (#40), making it 13 for 13 from the SEC using your criteria.

https://extra.ncaa.org/solutions/rpi/Stats Library/SB Nitty Gritty Through 5-13-2018.pdf
 
Again: it is the formula of the RPI that we don't trust. You keep trying to state that it is accepted and correct. There are a great number of people who do not accept the validity of the RPI. It is not effective unless it is accepted.

The fact that the NCAA, ESPN, and the committee have accepted it as the primary factor is exactly what is in question. When you state that the committee has done this, you may have accepted it. Not everyone does. The fact is that committee members have stated that the final determinant is an eyeball test, which means that even they don't fully accept it.

I don't think that the RPI formula adequately weights where a game is played. You accept it. Let me suggest that there is enough question that it will eventually be changed.
 
No, it is the formula of the RPI that you don't trust, despite the historical accuracy of its results. It might not be accepted, but that does not determine its accuracy. What the committee and the NCAA membership have accepted is the recognition that the RPI rankings, while not absolute, are doing an effective job of fulfilling their charter.

The RPI is not a utopian solution. It has its faults. There are things that you, others and I would like to see changed. It needs to continue to be a dynamic system that is constantly evolving toward a more ideal selection process.

It also has problems that will never be resolved, such as margin of victory, because of the system's desire to avoid encouraging the gambling element. There will always be a disparity in comparisons because teams do not play the same schedule and never will. The sports season cannot be easily controlled for clinical study. Too many variables are simply difficult to measure.

I abhor the absence of transparency, where we cannot see every minute calculation; you despise the inclusion of conference results because you see it as comparing apples to oranges; and I would like to see more weight on W% and less weight on SOS. Many others have their complaints. But all of our objections are subjective, as we have no documentation to justify any changes to improve results. And results are the name of the game.

What we do know is that the historical results of the system have been exceedingly successful in projecting the participants in the Final Four, Women's Final Four, CWS and WCWS. And it is the best accepted alternative available today for its participants.

But those highly accurate results don't change our opinions, which are the basis of American life, where a landslide is a 54% majority with a full 46% opposed. The RPI is no different: about half the people will like it and half will hate it, depending on how it affects their team or their conference, or appears to them to favor others. Regardless of what changes are made and how accurate the RPI becomes over time, half the people are still going to despise it. Just as they do/did the AP poll, the Sagarin ratings, the Palm projections, past computer polls, etc. It is the American way of life, and it applies to all things from race to religion, to politics and to sports. It comes from more than 300 million people all having their own opinions.

It is getting a consensus that is the challenge, regardless of the subject. But the RPI does a very good job. I feel certain, as do you, that it could do a better job. But getting a consensus for more than a moment is extremely doubtful.
 
You assume the results are accurate because you accept them. I don't. I would insist that the formula be revised. I don't think it can be perfect. But, it can be a lot more useful than it is.

By now, if you haven't noticed, I simply do not accept your premise that everything is as it should be.
 
Think of OU softball, and you might think of Lauren Chamberlain’s home run records or Keilani Ricketts’ prolific strikeout rate. Even now, there’s Paige Parker’s power-pitching dominance and Jocelyn Alo’s emerging pop at the plate.

But behind OU’s past two national titles — and now the Sooners’ quest for a third, starting Friday in the NCAA Norman Regional — is a renewed focus on the fundamentals. That’s always what coach Patty Gasso has cared about the most.

“We’ve always been focused on defense,” Gasso said. “That’s kind of my pride and joy because I work with them a lot on it. The difference with this team today versus, say, the 2013 team is just power. They had that pretty much anyone in the lineup could hit it out. The focus with this team is hit-and-run, run-and-hit, put the ball in play, squeezes, bunts with two outs — just execution and catching them off-guard and keeping a defense uncomfortable.”

Link
 
On another board, I indicated that what impressed me most about this team was the defense. I don't know that I have ever seen a softball team that made more great plays while avoiding mistakes as well as this team does.

I thought Arnold was the best third baseman I had ever seen, until Romero took over. I expect the hot shots down the line to be handled as though they were simple bounding balls with Sydney on third. It's about the same with Shay at first. I never thought of her as a defensive player until I noticed that she caught the shots down the line with great regularity, adding a good pursuit of pop fouls. She made a simple pickup of a line shot to her backhand in the Big 12 that stunned me. She just gently turned and took a couple of steps to touch the bag.

Clifton has so much range that it is almost like having a short-fielder in right center. Would it be an exaggeration to suggest that we may eventually see her catch a popup at the right field wall? OK, but how much of one?

Until she injured her hamstring, Arnold always seemed to make the right play. She's made a couple of mental errors since, laying back on slow ground balls, which is uncharacteristic, and I don't know whether it has anything to do with her injury. But she is consistent at what she does. Catch a ground ball, and throw it to first. Life is so simple.

We'll miss Pendley next year. I so often see center fielders streaking to make a great catch in right or left center. Sometimes that is because they got a late start or misread the ball at first, and they use their speed to recover. Pendley seems to play in slow motion. She gets an early read and glides to the ball. You really don't notice how fast she is. She just makes the right read over and over. I see a little of her in Mendes, but Mendes is still learning.

The defense up the middle with Pendley, Arnold, Clifton, Parker (Lowary), and Wodach is what you hope a defense will be, except that the corners may be even better. College teams just aren't that good on defense. But, the failure of the Auburn defense let us win one title.
 
You assume the results are accurate because you accept them. I don't. I would insist that the formula be revised.

Typical. You would assume the results are inaccurate because of your unsubstantiated subjective doubts, which I mentioned could be resolved with more transparency. Do you really think the presidents and their ADs are not demanding justification of the accuracy of seedings that impact their financially strapped athletic departments? Moreover, getting 130+ schools to bite their tongues and not complain adamantly in public about an unfair system would require one hell of a conspiracy.

It would also be more logical to assume that member schools are working within the system, registering their desires for system enhancements and/or asking questions about what they see as system flaws.

But until you or I present and document a better system, we are just ill-informed complainants who think we know more about how to evaluate and compare teams for seeding purposes than the hired professionals. Meanwhile, the RPI is a proven effective system (over the last decade, 11 of the 20 finalists, or 55%, have been top-4 seeds; 6, or 30%, seeds 5-8; 2, or 10%, seeds 9-16; and 1, or 5%, unseeded) that will continue to be used for the foreseeable future because the membership supports it.
 
On another board, I indicated that what impressed me most about this team was the defense. I don't know that I have ever seen a softball team that made more great plays while avoiding mistakes as well as this team does.

I wish this board alerted me when there were quote-replies to posts. I should put that on the request thread :)

It is quite apparent that this team will make opponents pay for any mistake at any time this postseason --- no top or bottom of an inning off!
 
Guerin Emig's latest column just went up...

Oklahoma has captured back-to-back national championships by steamrolling the Big 12 Conference, handling the best the SEC and Pac-12 have to offer, and surviving Emily Watson.

If the Sooners want a three-peat, they must survive her again.

Watson and the Tulsa Golden Hurricane are at the Norman Regional this weekend. They’ll face Missouri on Friday at 3:30 p.m., before OU faces Boston University at 6. The Sooners and Hurricane must win to meet in Saturday’s winners bracket, but c’mon, who are we kidding?

If there is one thing destined to happen in the NCAA postseason this week, it’s Watson getting a final crack at disintegrating OU’s mini-dynasty.

Link
 
Meanwhile, the RPI is a proven effective system (over the last decade, 11 of the 20 finalists, or 55%, have been top-4 seeds; 6, or 30%, seeds 5-8; 2, or 10%, seeds 9-16; and 1, or 5%, unseeded) that will continue to be used for the foreseeable future because the membership supports it.

Nonsense. The typical softball fan can probably pick the top two teams about half the time, certainly two of the top four. Of course, the RPI didn't even have last year's champion in its top eight---and only the RPI.
 
We have a diverse group of posters around these parts. Sounds like you earned your post-graduate degree from UCS (the University of Common Sense).
 
Nonsense. The typical softball fan can probably pick the top two teams about half the time, certainly two of the top four. Of course, the RPI didn't even have last year's champion in its top eight---and only the RPI.

You amuse me. The typical softball fan would be so swayed by local and regional team and conference bias that you could not get consensus agreement.

Want to attempt to partially validate your statement about projecting the final two? Create a poll of the local softball fans who post here, asking them to list their projection for this year's WCWS finals. Restrict viewing of the poll until it is closed. See what percentage picks the two finalists. It will not approach 50%.

Now imagine that same poll taken in Arizona, California, Oregon, Washington, Florida, Georgia, Louisiana, Minnesota, Michigan, Ohio, Texas and Tennessee, and tell me that 50% are going to accurately pick the two finalists. You live in a dream world. 10-15%, maybe. But the mathematical odds of picking the champion and runner-up, in order, from 8 teams are about 1.8% (1 in 56), and from 4 teams about 8.3% (1 in 12). Expecting 50% of the typical softball fans from across the country to be about 28 times better at picking the winners than the odds allow is outlandish.
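For anyone who wants to check that arithmetic, here is a quick sketch. It assumes the 1-in-56 and 1-in-12 figures come from picking the champion and runner-up in order; picking an unordered pair of finalists roughly doubles the odds.

```
from math import comb, perm

# Champion and runner-up, in order (the figures cited above):
print(1 / perm(8, 2))   # 1/56, about 1.8%
print(1 / perm(4, 2))   # 1/12, about 8.3%

# Two finalists without regard to order:
print(1 / comb(8, 2))   # 1/28, about 3.6%
print(1 / comb(4, 2))   # 1/6, about 16.7%
```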

But outlandish statements and diversions are your forte. I might even suggest you apply for the Trump presidential staff. He needs some of your diversion skills for his candid tweets.
 
Thought you all would like to read the lead-in to OSU softball feature today...

STILLWATER — The tradition of reading the senior letters began two years ago on a Friday afternoon before Kenny Gajewski’s team hosted Oklahoma at Cowgirl Stadium.

It was a horrible mistake.

Three hours before first pitch, Gajewski gathered his team. He brought envelopes containing the letters he asked Oklahoma State’s seniors to write before the season — letters expressing their goals and reflecting on their careers. He asked selected freshmen to read them aloud.

Then the OSU seniors read letters Gajewski penned to them. Emotions poured out.

By the time the 6 p.m. Bedlam showdown began, Oklahoma State was spent. It surrendered five runs in the top of the first inning and got run-ruled in five. Gajewski learned — the hard way — that the tradition was not a pregame activity.

Link
 