2016-09-06

By Paul Fain



Last month the U.S. Department of Education unveiled eight applications it had selected to participate in an experiment that allows students to use federal financial aid to attend programs run by colleges and nontraditional providers, including boot camps, companies offering online courses and employers. Each partnership also features a quality-assurance entity, which will act as an alternative form of accreditor.

The project — dubbed the Educational Quality through Innovative Partnerships program, or EQUIP — has both fans and critics. So Inside Higher Ed moderated a debate over email between Paul LeBlanc, president of Southern New Hampshire University, who helped create EQUIP during a stint at the department last year, and Barmak Nassirian, director of federal relations and policy analysis for the American Association of State Colleges and Universities, who has questioned aspects of EQUIP. The exchange follows.

Q. How could the EQUIP program open a “loophole” to future waste, fraud and abuse, as you and other critics have argued?

Nassirian: While the colleges participating in the program are highly reputable institutions, there are multiple grounds for concern with this initiative and the policy goals that it may be designed to rationalize. The program is labeled an experiment but fails virtually every design requirement for one. It does not draw from a random sample of institutions or potentially eligible new providers; it uses actual students with no apparent safeguards against the consequences of failed experiments; it relies on the predictions of unproven third parties who have nothing to lose if they turn out to be grossly incorrect; and it will end (presumably declaring victory by acclamation) well before actual data on its long-term impact on students and taxpayers could be known.

Given all this, it seems that EQUIP is really intended as a pretextual prototype of a policy that its architects believe couldn’t possibly go wrong. That this judgment is being rendered by an agency with a consistent history of gatekeeping failure ought to be quite alarming. The U.S. Department of Education has had a catastrophic track record of poor oversight of current providers, and here it is on the verge of expanding the universe of potential participants to thousands of unknown players on the say-so of untested and questionable new quality-assurance entities.

I appreciate the instinct behind the effort, which is to seek out ways to broaden access, contain costs and promote innovation. But I’m afraid that the urgency of realizing these ideals quickly is blinding policy makers to the predictable pitfalls of what they are proposing. We’ve been down this road before with previous panaceas. Consider, for example, how the elimination of the 50 percent rule [which had required academic programs to be at least 50 percent in person; its elimination allowed programs to be offered entirely online] 10 years ago was supposed to accomplish all of the above, but gave us a decade of rampant fraud instead. I fear that EQUIP is just greasing the skids for a new cycle of waste, fraud and abuse in the hands of fly-by-nights, “disruptive innovators” and here-today, gone-tomorrow start-ups.

LeBlanc: As someone at the table when EQUIP was designed, I can attest to the keenly felt anxiety among many in the department about not revisiting the fraud and abuse that followed the lifting of the 50 percent rule. That concern has had a significant shaping influence on EQUIP. How so? Critics within the department successfully limited the number of approved experiments to eight when many of us hoped for two dozen or more. The quality-assurance entities that will offer up new approaches to quality assurance for providers that are not institutions of higher education (IHEs) must conduct evaluations at multiple points in the program schedule (early stage, midpoint, completion and post program) and report on each provider’s performance to the IHE partner, the regional accreditor and the department every six months. Reinforcing this belt-and-suspenders approach, the host IHE’s accreditor must also approve the partnership. In other words, the experiment is being kept on a very short leash and is under a high level of scrutiny.

When we look back at the lifting of the 50 percent rule, there was nothing like EQUIP’s safeguards in the 1998 demonstration project that tested a waiver of the rule, nor in the 2005 Higher Education Reconciliation Act that codified the rule’s elimination and spawned the rapid growth of online education. In stark contrast, EQUIP’s most important goal is to discover new and better approaches to ensuring program quality.

Its focus on outcomes, outputs, rigor in design and assessment, and transparency is exactly what we need as we assess innovative new approaches to educational delivery, approaches that try to lower cost, increase access and improve quality, a goal Barmak endorses in his comments. That focus on quality, on how we know it and what we report, is fundamentally different from the 50 percent rule change and its lack of quality assurance. It is precisely why EQUIP deserves support.

I had to chuckle just a bit at the idea that EQUIP was rushed into being with false urgency. I began working on it in March 2015, during a three-month stint within the department, and it has taken 18 months of discussion, debate and labored-over detail to get this far. There’s much work still to be done before we see the first programs approved and launched. I wouldn’t get too hung up on the word “experiment” here. Better to think of EQUIP as simply creating a very modest and tightly controlled space to invite innovation around high-quality programs and new quality assurance, to learn and to inform eventual policy making. Eight tightly controlled partnerships hardly put us on the “verge of expanding the universe of potential participants to thousands of unknown players on the say-so of untested and questionable new quality-assurance entities.”

If we are to eventually reform and improve higher education, we need the sandboxes that programs like EQUIP provide — carefully managed “safe zones” for trying new things and assessing their efficacy.

Q. Could the experiment lead to an alternative form of accreditation, and are you happy with the mix of selected quality-assurance entities?

LeBlanc: For many of those working on the design of EQUIP and then advocating for it within the Education Department, the most important part of the experiment is the standing up of new approaches to quality assurance that could lead either to improved accreditation practices among existing accreditors or to the creation of new, alternative accreditors. Fair or not (I think much of it is unfair), there is pervasive criticism of existing accreditation inside and outside the department. And whatever the criticism, I think we’d acknowledge that most accreditation is built on a foundation of inputs or prescriptions. I led the working group that created the quality-assurance entity questions outlined in the Federal Register notice that announced EQUIP, and there are three immediate observations one might objectively make from the questions being asked of the QAE process:

The questions focus on detailed outcomes and outputs;

There is rigor defined to an unusual degree (note, for example, item B3 on the validity of assessments);

There is a demand for transparent data and reporting.

Indeed, one could legitimately ask how many traditional institutions would today pass muster if their accreditors adopted these standards.

Generally, I am happy with the mix of quality-assurance entities chosen for the experiment. It includes some very traditional established players like the American Council on Education and the Council for Higher Education Accreditation, some players who have worked in quality assurance in other ways and are now bringing their strengths to the challenge (the American National Standards Institute and Quality Matters), and some QAEs not previously on the radar screen that may take very novel approaches to the questions outlined in the Federal Register (such as Entangled Ventures, Climb and Tyton Partners). The fact that some innovators roll their eyes at the more traditional players while some traditionalists do the same for the new, largely unknown QAEs is probably a good sign in the end. As a skeptic of some specialized accreditation, I’m not sure I would have included HackerRank, given the narrowness of their scope. But if they can successfully address the questions we outlined, more power to them.

The idea — one can’t say it enough, but this is an experiment — is to see how well these new approaches work. It’s easy to imagine that some might fall short of what is desired, others might contribute some valuable new approaches or thinking to accreditation, and others may prove so effective that they could be encouraged to apply to become recognized accreditors for Title IV purposes. There was some hope that existing regional accreditors might participate, and while none raised their hands in the end, they will be watching closely, and it’s quite possible that the experiment sparks new thinking among them.

Nassirian: I am not as sanguine as Paul about accreditation and its effectiveness in assuring institutional integrity, although I do continue to harbor hopes that we can restore and preserve accreditation through fairly minor tweaks to its statutory function in Title IV. Having said this, I think we should also be open to alternative mechanisms of quality assurance in case efforts to get the accreditors to take their gatekeeping role more seriously continue to prove futile.

Unfortunately, EQUIP’s quality-assurance entities fall short of what is needed to validate hitherto untested providers. First, the entities themselves are untested in the roles to which they are assigned, a fact that makes them poor candidates for assessing a high-risk program. Furthermore, while I agree that the focus of their assessment of programmatic performance has shifted from inputs to outcomes, it is important to note that they are not actually observing the most crucial long-term outcomes but are in effect vouching that those outcomes will be realized. I say this because it will be years before we really learn how the programs have affected the students on whom this experiment is being carried out.

In fact, EQUIP’s quality-assurance methodology is even more abstract and less reliable than that of current accreditors, who, at least in theory, make a judgment about the adequacy of observable resources and real inputs. I’m afraid that most of EQUIP’s quality assurance will be reducible to grading glowing reports of participating programs, solely on the basis of how narratively compelling they sound, not on the basis of actual data and facts. Ironically, the government could have used actual data for quality-assurance purposes, as Tony Carnevale argues, but has opted for a Rube Goldberg alternative. Added to the two concerns above is the fact that, just as with our current quality-assurance regime, the selected entities face no adverse consequences for failure. There is no penalty or even minimal risk retention for poor judgment by entities whose oversight is a condition of eligibility for millions of dollars of federal funding. And lastly, there are real questions about the resources, capabilities and potential conflicts of some of the entities selected as quality-assurance agents.

Q. Was it a mistake for the department to pick mostly for-profits as the alternative providers? And is your concern with EQUIP or that some future version of it — without safeguards — gets codified by legislation?

Nassirian: I am not so much concerned about their tax status as I am about the fact that the product they are offering is untested and a little too good to be true. It’s important to take a step back and really think about the hypothesis that this pseudo-experiment is supposed to be testing: that these participants can somehow produce better learning and employment outcomes faster and less expensively than schools can. And what do the advocates of giving these players a shot — with real students and real federal money, no less — offer as probable cause that these fantastic claims are worth testing in the first place? The very fact that they have no significant track record! In a more orderly policy environment, these claims would be verified in the marketplace with private money over a longer haul before being elevated to candidacy for experimental federal funding. I think policy makers are in such a panic to find easy solutions to the vexing and complicated problems of access and affordability that they have suspended all disbelief.

And yes, I am quite concerned that this initiative will play into the hands of industry lobbyists working the Hill by making the ludicrous respectable enough to be written into legislation. The department may think it is moving prudently, but I remain unimpressed with the safeguards and oversight mechanisms of the program. There are inherent problems with relying on attestations and forecasts of quality-assurance entities, some of which have significant conflicts of interest. In any case, the U.S. Congress and the industry won’t wait for any results to come in before they universalize the concept that entities that don’t even purport to be schools should gain access to billions of dollars of federal student aid. And what little actual monitoring there is will be swapped out in favor of a simple veneer of oversight, just as with the current system.

LeBlanc: It is often said that this administration has it out for for-profits, and it is undeniable that (A) it has aggressively gone after for-profits it believes are of poor quality, offer poor outcomes or engage in outright fraudulent activity, and (B) the Education Department has people with a reflexive antipathy toward any for-profit provider. Like Barmak, many people in the department, including the Office of Inspector General, are scarred by the abuses of the correspondence programs and for-profits they saw emerge after the 50 percent rule was lifted.

However, there are four problems in Barmak’s response here, in my view:

It continues to ignore the multilevel safeguards that EQUIP has in place and that I described earlier;

It ignores that these are not for-profits acting alone, but in partnership with IHEs whose reputations and good standing are also at stake here, excellent institutions like Northeastern University and the University of Texas at Austin;

It ignores that there is now a sizable contingent of skeptics within the department who will be watching this experiment closely;

Most importantly, it is simply not true that these providers are untested. For example, the Flatiron School has something like a 99 percent placement rate at graduation, with starting salaries that average $75,000. Moreover, they use an independent third-party auditor to verify their outcomes and publish them. If an IHE is partnering with a company in designing an advanced manufacturing program, it is hard to imagine a better partner than General Electric.

I might not have made all the choices the selection committee made, such as including bachelor’s degree programs (there is an enormous number of affordable options out there for those), but there is nothing “fantastic” about the claims being made. My hope is that the department will soon release the actual proposals so people can see for themselves.

As for the dangers of making bad policy, the logic here is: don’t create any safe places to innovate new models, because bad actors may subsequently take those models and legislate impoverished or poor-quality versions of them. That feels like a recipe for never getting better. I suggest that the real problem here is in policy making, not in safe innovation spaces like EQUIP. Indeed, we need more safe spaces like EQUIP to try new things, learn and inform eventual policy making. Safe spaces also include room for making mistakes and learning from them. Any of us who work at innovation know this from experience. Policy makers struggle with that fundamental truth, so they resist efforts like EQUIP or build in so many safeguards and restrictions that it becomes almost impossible to actually innovate. I wholly agree with Barmak that an eventual version of EQUIP open to all, without the considerable safeguards EQUIP provides, would be a disaster. But that is a problem in policy making, not in EQUIP itself, and the answer can’t be a version of the slippery slope argument, which is to never try anything new.

Nassirian: First, I want to emphasize that the problem here is with the basic thesis and the framing of this misadventure, and that the for-profit/nonprofit distinction is not the proper lens for evaluating the initiative.

Second, the placement and salary data that Paul cites are self-reported and highly exaggerated, as has been reported in the news media. Also, it is doubtful that these niche offerings can scale and still remain as effective and lucrative as they claim to be. There is a limit to the number of people needed for coding jobs, for example. In addition, there’s the problem of the long-term impact of these programs, evidence of which will take a couple of decades to come in. We have had too many examples of short-term labor-market phenomena that have proven unsustainable over the long haul. We couldn’t get enough people with basic HTML skills in the late 1990s, but those jobs disappeared within a few years. Postsecondary education is like an annuity: the costs are incurred up front, but the benefits are supposed to trickle in over many years.

Finally, I am struck by Paul’s use of the term “safe spaces” and want to understand what he means by that. Safe for whom? Certainly not for the students who are used as canaries in the coal mine. If any of these experiments fail, what protection or recourse will the victims have? They will have already exhausted a good chunk of their Pell eligibility and they may have racked up debt on top of that. The program is certainly not safe for them, unless, of course, Paul is absolutely certain that none of these programs could possibly fail. And that would bring me back to my initial objection: EQUIP is not so much a test of a credible hypothesis as it is a pretext for doing things that have been decided a priori.

LeBlanc: The article and misreported salary data that Barmak cites concern providers not included in EQUIP. I cite the Flatiron School, which does not self-report but has an independent auditor verify its placement data. Niche offering? At least one study projects a need for 1.4 million full-stack web developers over the next five years and a one-million-person shortfall on the supply side. Tell the companies screaming for these positions that they are niche. I’m not saying that these programs are for everyone or should replace the four-year degree, with all of its lifelong benefits, but if EQUIP allows a low-income person to access a program that virtually guarantees them a job and a starting salary of $75,000, I’ll take it (and please don’t again cite the schools that fall short — they are not in EQUIP).

As for safe spaces for students, EQUIP requires full disclosure to prospective students about the nature of the program, its participation in EQUIP and what that means (including possible termination and teach-out of the program). In addition, the partnerships must describe in detail the ways they will make students whole, including loan repayment and refunds “above what is normally required of them under the existing Title IV, HEA program regulations.” Did you read the actual Federal Register notice?

There’s a “people in glass houses” dimension to Barmak’s objections, as traditional nonprofit higher education has millions of students who exhaust their Pell Grants, don’t complete a degree and rack up enormous amounts of debt. The difference here is that EQUIP asks far more of the providers than do the existing regulatory and accreditation frameworks, and it provides far greater transparency and testing of the claims providers make to students. Barmak and I share a real fear of eventual legislation that waters down quality control and protections for students — EQUIP does neither — and we need to bring to any eventual policy discussion the kind of rigor that has been brought to EQUIP over the last 18 months.

Nassirian: I agree that Flatiron has been a good operation. But since Paul brings up their audit, I suggest that readers take a look at it and judge for themselves whether the methodology and the n-count give them his level of comfort. They have a grand total of 244 graduates over a two-year cycle, and they generate their stunning statistics by excluding many of the nonemployed graduates for various reasons. And this is before the spigot of easy federal financing is turned on. Oh, and the entire report is based on tracking graduates for a whopping 120 days. As a reminder, standard amortization for federal loans has a 120-month term, and repayment runs much longer for many borrowers.

As to giving low-income people guaranteed $75,000 starting salaries, yes, that sounds great, but there is literally zero evidence that coding boot camps can do that. Their students are disproportionately college graduates, with significant employment experience and either great credit histories or $12,000 to $20,000 on hand to pay their fees. How you get from outcomes for that population to a belief that identical results would also be achievable for low-income students, I don’t know. What I do know is that there is a feeding frenzy among some of the worst actors in the for-profit sector to acquire boot camps, for reasons that should be obvious.

LeBlanc: So, Barmak, you are blaming them for being too young to have more data? The 244 students they report on are all of their graduates thus far — it is as complete a sample as it can be. The exclusions? Graduates who returned to their home countries, others who went on to graduate school and others who started their own companies instead of going to work for someone else. Those seem like very reasonable exclusions to me. The 120 days — four months — is a very reasonable milestone for determining the success of the program in placing people in good-paying jobs. On that front, they perform exceedingly well. If you were to look closely at their program, you’d see that it is rigorous and demanding in ways wholly absent from the fraudulent programs you fear.

When you write, “As to giving low-income people guaranteed $75,000 starting salaries, yes, that sounds great, but there is literally zero evidence that coding boot camps can do that,” you are simply incorrect in this case. Flatiron is partnering with New York City to offer the program to lower-income students without college degrees in Brooklyn. And while those students require about 120 additional hours of “bridging” or ramp-up time, their outcomes equal those of the college-educated students in the Manhattan-based program, and their average starting salaries are slightly higher.

More importantly, the quality assurance that EQUIP demands of QAEs doesn’t allow the “worst actors” or any actors to get by with poor-quality programs. Look at the questions being asked — the level of detail, the focus on outcomes and outputs, the rigor demanded — and tell me where EQUIP falls short. You keep saying you are not persuaded, but you have not yet addressed the details of the program. Add to that the very short reporting intervals (every six months), the multilayer oversight (institutional partner, accreditor, Education Department) of the QAEs and new providers, and the aforementioned protections for consumers.

The response seems to be:

I wish for-profits were not allowed to be part of the higher educational landscape;

There are bad actors out there who would love to find ways to access Title IV dollars;

We should not try anything innovative because policy makers might later on take only parts of what works and create bad policy; and

I don’t trust the department to manage this program well.

That is not really a detailed critique of EQUIP or an appraisal of its actual design. It’s more a reflexive unwillingness to take a close look and acknowledge the genuine strengths, safeguards and advantages of a well-designed, if modest, program.

Q. Should ACE and CHEA have been included as QAEs? Critics said they, as establishment players, would have conflicts of interest.

LeBlanc: There are really smart people at both ACE and CHEA, and both have been grappling with the recent criticisms of accreditation and with new ways to think about quality assurance, especially given the emergence and growth of competency-based education. For many working on EQUIP, there was also some hope that at least some of the more traditionally situated players would want to participate. So on those grounds, I’m generally OK with their participation. I’d be happier if they were two of 20 rather than two of just eight, since the goal was to encourage as many new approaches to quality assurance as possible, knowing that in each case a regional accreditor, as well as the department itself, had to review and approve as a backstop. It may be telling that the two partnerships they are involved with are the two most conventional offerings on the list, but it is also encouraging to see CHEA talking about things like repayment ability as part of the outcomes it will examine. Again, I’m eager to see the full proposals, as the shorter descriptions make it hard to understand what’s innovative about ACE’s survey-based approach. (To be fair to ACE, the provided descriptions are shorthand, and there is likely much more at work there.)

If it turns out the department largely ignored the quality-assurance questions we originally developed and that it listed in the Federal Register notice, I will join Barmak in being far more worried than I am thus far. Fundamentally, those who worry that a CHEA or an ACE is too grounded in the traditional to do something innovative and effective in evaluating the new providers are arguing that only outsiders can innovate. That is often true. I hope to be pleasantly surprised.

Nassirian: I certainly agree that there are very smart people at ACE and CHEA but question the wisdom of assigning federal gatekeeping responsibilities to trade associations that lobby for schools. We have been down this road before, and the results were as terrible as any reasonable person would expect them to be. Congress explicitly disqualified trade associations from serving as accreditors back in 1992 for this very reason, but the department has decided to ignore that blanket ban by slapping a new label on the function because … it couldn’t find smart people anywhere else?
