Thursday, May 30, 2013

Leslie Fenwick: Urban Education Reform Is Really About Land Development and Money




Ed school dean: Urban school reform is really about land development (not kids)

Dean Leslie Fenwick (howard.edu)
(Correction: Fixing publication date for book, and 
removing quote attributed to book)
LINK
Here is a provocative piece from Leslie T. Fenwick,
dean of the Howard University School of Education
and a professor of education policy, about what is
really behind urban school reform. It’s not about
fixing schools, she argues, but, rather, about urban
land development. Fenwick has devoted her career
to improving educational opportunity and outcomes
for African American and other under-served students.
By Leslie T. Fenwick
The truth can be used to tell a lie. The truth is that black parents’ frustration with the quality of public schools is at an all-time righteous high. Though black
and white parents’ commitment to their child’s schooling is comparable, more 
black parents report dissatisfaction with the school their child attends. 
Approximately 90 percent of black and white parents report attending parent-teacher association meetings and nearly 80 percent of black and white parents
report attending teacher conferences. Despite these similarities, fewer black 
parents (47 percent) than white parents (64 percent) report being very satisfied 
with the school their child attends. This dissatisfaction among black parents is so whether these parents are college-educated, high income, or poor.

The lie is that schemes like Teach For America, charter schools backed by 
venture capitalists, education management organizations (EMOs), and Broad Foundation-prepared superintendents address black parents’ concerns about
the quality of public schools for their children. These schemes are not designed 
to cure what ails under-performing schools. They are designed to shift tax dollars 
away from schools serving black and poor students; displace authentic black 
educational leadership; and erode national commitment to the ideal of public 
education.
Consider these facts: With a median household income of nearly
$75,000, Prince George’s County is the wealthiest majority black
county in the United States. Nearly 55 percent of the county’s businesses
are black-owned and almost 70 percent of residents own homes,
according to the U.S. Census. One of Prince George’s County’s
easternmost borders is a mere six minutes from Washington, D.C.,
which houses the largest population of college-educated blacks in
the nation. In the United States, a general rule of thumb is that
communities with higher family incomes and parental levels of
education have better public schools. So, why is it that black parents
living in the upscale Woodmore or Fairwood estates of Prince George’s County or the tony Garden District homes up 16th Street in Washington
D.C. struggle to find quality public schools for their children just like
black parents in Syphax Gardens, the southwest D.C. public housing community?

The answer is this: Whether they are solidly middle- or upper-income
or poor, neither group of blacks controls the critical economic levers
shaping school reform. And, this is because urban school reform is not
about schools or reform. It is about land development.

In most urban centers like Washington D.C. and Prince George’s
County, black political leadership does not have independent access
to the capital that drives land development. These resources are still controlled by white male economic elites. Additionally, black elected
local officials by necessity must interact with state and national officials.
The overwhelming majority of these officials are white males who often
enact policies and create funding streams benefiting their interests and
not the local black community’s interests.

The authors of “The Color of School Reform” affirm this assertion in
their study of school reform in Baltimore, Detroit and Atlanta. They
found:

Many key figures promoting broad efficiency-oriented reform initiatives
[for urban schools] were whites who either lived in the suburbs or sent
their children to private schools (Henig et al, 2001).

Local control of public schools (through elected school boards) is
supposed to empower parents and community residents. This rarely
happens in school districts serving black and poor students. Too often
people intent on exploiting schools for their own personal gain short
circuit the work of deep and lasting school and community uplift.
Mayoral control, Teach for America, education management
organizations and venture capital-funded charter schools have not
garnered much grassroots support or enthusiasm among lower- and
middle-income black parents whose children attend urban schools
because these parents often view these schemes as uninformed by their community and disconnected from the best interest of their children.
In the most recent cases of Washington D.C. and Chicago, black parents
and other community members point to school closings as verification
of their distrust of school “reform” efforts. Indeed, mayoral control has
been linked to an emerging pattern of closing and disinvesting in schools
that serve black poor students and reopening them as charters operated by education management organizations and backed by venture capitalists.
While mayoral control proposes to expand educational opportunities for
black and poor students, more often than not new schools are placed in
upper-income, gentrifying white areas of town, while more schools are 
closed and fewer new schools are opened in lower-income, black areas
thus increasing the level of educational inequity. Black inner-city residents
are suspicious of school reform (particularly when it is attached to neighborhood revitalization) which they view as an imposition from
external white elites who are exclusively committed to using schools to recalculate urban land values at the expense of black children, parents and communities.

So, what is the answer to improving schools for black children? Elected officials must advocate for equalizing state funding formulas so that urban school districts garner more financial resources to hire credentialed and committed teachers and stabilize principal and superintendent leadership. Funding makes a difference. Black students who attend schools where 50 percent or more of the children are on free/reduced lunch are 70 percent
more likely to have an uncertified teacher (or one without a college major
or minor in the subject area) teaching them four subjects: math, science,
social studies and English. How can the nation continue to raise the bar
on what we expect students to know and demonstrate on standardized tests
and lower the bar on who teaches them?

As the nation’s inner cities are dotted with coffee shop chains, boutique furniture stores, and the skyline changes from public housing to high-rise condominium buildings, listen to the refrain about school reform sung by
some intimidated elected officials and submissive superintendents. That refrain is really about exporting the urban poor, reclaiming inner city
land, and using schools to recalculate urban land value. This kind of
school reform is not about children, it’s about the business elite gaining
access to the nearly $600 billion that supports the nation’s public schools.
 It’s about money.

Sunday, May 26, 2013

Is Hybrid Learning Disruptive?

Is K–12 blended learning disruptive?
An introduction to the theory of hybrids

May 2013
EXECUTIVE SUMMARY
The Clayton Christensen Institute, formerly Innosight Institute, has published three papers describing the rise of K−12 blended learning—that is, formal education programs that combine online learning and brick-and-mortar schools. This fourth paper is the first to analyze blended learning through the lens of disruptive innovation theory to help people anticipate and plan for the likely effects of blended learning on the classrooms of today and schools of tomorrow. The paper includes the following sections:
Introduction to sustaining and disruptive innovation
There are two basic types of innovation—sustaining and disruptive—that follow different trajectories and lead to different results. Sustaining innovations help leading, or incumbent, organizations make better products or services that can often be sold for better profits to their best customers. They serve existing customers according to the original definition of performance—that is, according to the way the market has historically defined what’s good. A common misreading of the theory of disruptive innovation is that disruptive innovations are good and sustaining innovations are bad. This is false. Sustaining innovations are vital to a healthy and robust sector, as organizations strive to make better products or deliver better services to their best customers.
Disruptive innovations, in contrast, do not try to bring better products to existing customers in established markets. Instead, they offer a new definition of what’s good—typically they are simpler, more convenient, and less expensive products that appeal to new or less demanding customers. Over time, they improve enough to intersect with the needs of more demanding customers, thereby transforming a sector. Examples in the paper from several industries demonstrate the classic patterns of both types of innovation.
Theory of hybrids
Often industries experience a hybrid stage when they are in the middle of a disruptive transformation. A hybrid is a combination of the new, disruptive technology with the old technology and represents a sustaining innovation relative to the old technology. For example, the automobile industry has developed several hybrid cars along its way to transitioning from gasoline-fueled engines to engines with alternative power sources. The leading companies want the virtues of both, so they have developed a sustaining innovation—hybrid cars that use both gasoline and electricity. Other industries—including earth excavators, steamships, photography, retail, and banking—have experienced a hybrid stage on their way to realizing the pure disruption. Industries create hybrids for predictable reasons, including because the business case for the purely disruptive technology is not compelling at first to industry leaders, whereas implementing a hybrid as a sustaining innovation allows incumbents to satisfy their best customers.
How to spot a hybrid
Hybrid innovations follow a distinct pattern. These are four characteristics of a hybrid:
  1. It includes both the old and new technology, whereas a pure disruption does not offer the old technology in its full form.
  2. It targets existing customers, rather than nonconsumers—that is, those whose alternative to using the new technology is nothing at all.
  3. It tries to do the job of the preexisting technology. As a result, the performance hurdle required to delight the existing customers is quite high because the hybrid must do the job at least as well as the incumbent product on its own, as judged by the original definition of performance. In contrast, companies that succeed at disruptive innovations generally take the capabilities of the new technology as a given and look for markets that will accept the new definition of what’s good.
  4. It tends to be less “foolproof” than a disruptive innovation. It does not significantly reduce the level of wealth and/or expertise needed to purchase and operate it.
Importantly, where there is no nonconsumption in a market, a hybrid solution is the only viable option for a new technology that underperforms the old based on the original definition of performance. That means that in markets with full consumption, hybrid innovations tend to dominate instead of pure disruptions.
Hybrid models of blended learning
In many schools, blended learning is emerging as a hybrid innovation that is a sustaining innovation relative to the traditional classroom. This hybrid form is an attempt to deliver “the best of both worlds”—that is, the advantages of online learning combined with all the benefits of the traditional classroom. In contrast, other models of blended learning appear disruptive relative to the traditional classroom. They do not include the traditional classroom in its full form; they often get their start among nonconsumers; they offer benefits that accord to a new definition of what’s good; and they tend to be more foolproof to purchase and operate.
In terms of the emerging blended-learning taxonomy, the Station Rotation, Lab Rotation, and Flipped Classroom models are following the pattern of sustaining hybrid innovations. They incorporate the main features of both the traditional classroom and online learning. The Flex, A La Carte,* Enriched Virtual, and Individual Rotation models, in contrast, are developing more disruptively relative to the traditional system.
Seeing what’s next with blended learning
The models of blended learning that follow the hybrid pattern are on a sustaining trajectory relative to the traditional classroom. They are poised to build upon and offer sustaining enhancements to the factory-based classroom system, but not disrupt it. The models that are more disruptive, however, are positioned to transform the classroom model and become the engines of change over the longer term, particularly at the secondary level. Any hybrid variety of blended learning is likely to fall by the wayside as the pure disruption becomes good enough.
When this happens, the fundamental role of brick-and-mortar schools will pivot. Schools will focus more, for example, on providing well-kept facilities that students want to attend with great face-to-face support, high-quality meals, and a range of athletic, musical, and artistic programs and will leverage the Internet for instruction.
Although traditional and hybrid classrooms are poised for disruption, we do not see brick-and-mortar schools falling by the wayside any time soon. This is because although many areas of nonconsumption exist at the classroom level—particularly in secondary schools—little nonconsumption exists at the school level in the United States. Almost every student has access to a government-funded school of some sort. We predict that hybrid schools, which combine existing schools with new classroom models, will be the dominant model of schooling in the United States in the future. But within secondary schools, the disruptive models of blended learning will substantially replace traditional classrooms over the long term. In the paper, we conclude that the models that are more disruptive—Flex, A La Carte, Enriched Virtual, and Individual Rotation—are positioned to transform the classroom model and become the engines of change over the longer term in high school and middle school, but likely not in elementary school.
Implications for education leaders
Education leaders can use the disruptive innovation lens to anticipate the effects of their efforts. Strategies that sustain the traditional model could benefit students for years to come. This path is the best fit for most classroom teachers, school leaders who have limited budgetary or architectural control over their schools, and those who want to improve upon the classrooms in which most students receive their formal education today. Other strategies that accelerate the deployment of disruptive blended-learning models will have a greater impact on replacing the classroom with a student-centric design. This path is a viable fit for school principals—often in charters but also within districts, especially in those that have moved to portfolio models—that have some autonomy with respect to budget and school architecture. Furthermore, district leaders with authority to contract with online providers, state policy leaders, philanthropists, and entrepreneurs all are in the position to play a role in bolstering disruptive innovation.
Education leaders can foster disruptive innovation in several ways, including by following these five steps:
  1. Create a team within the school that is autonomous from all aspects of the traditional classroom.
  2. Focus disruptive blended-learning models initially on areas of nonconsumption.
  3. When ready to expand beyond areas of nonconsumption, look for the students with less demanding performance requirements.
  4. Commit to protecting the fledgling disruptive project.
  5. Push innovation-friendly policy.
In the long term, the disruptive models of blended learning are on a path to becoming good enough to entice mainstream students from the existing system into the disruptive one in secondary schools. They introduce new benefits—or value propositions—that focus on providing individualization; universal access and equity; and productivity. Over time, as the disruptive models of blended learning improve, these new value propositions will be powerful enough to prevail over those of the traditional classroom.

Wednesday, May 22, 2013

Jay Michaelson: The Murder of Mark Carson is a Wake-up Call For Gay Rights


Mission Not Accomplished: The Anti-Gay Murder of Mark Carson Should Be a Wake-Up Call


Posted: 05/19/2013 8:03 pm

United Nations' Convention Against Corruption: The First Three Years


The First Three Years of the UNCAC Review Process: A Civil Society Perspective

16 May 2013.
This report by Transparency International (TI) and the UNCAC Coalition is about the experience of civil society organisations (CSOs) in the first three years of the UNCAC review process. It is the third such report and covers some of our activities and contributions under Resolution 4/1 on the Review Mechanism. It has a particular focus on the transparency of the process and the opportunities for civil society participation and is intended to contribute to discussions of the Implementation Review Group (IRG).

Background

The Terms of Reference for the UNCAC Review Mechanism and Guidelines for the review process were adopted by the UNCAC Conference of States Parties in November 2009. They encourage States Parties under review to involve civil society organisations (CSOs) in country self-assessments and country visits. They require publication of an Executive Summary of the review report but not of the full report. The current first 5-year review cycle, covering Criminalisation and Enforcement (UNCAC chapters III and IV), started in mid-2010.
The information presented in this report is based on a survey of the review process in 83 of the 104 countries in the first three years of review (see Annex). A survey questionnaire was sent to UNCAC Coalition CSOs supporting anti-corruption efforts in their countries and tables reflecting their responses are included in an annex to this report.
It should be noted that full information is not yet available about the results of some of the countries surveyed, particularly for those in the third year of review. Consequently, the transparency and participation results reported here may improve in the future.
The complete information about the UNCAC review process is held by the United Nations Office on Drugs and Crime (UNODC).

Key Findings

Positive results

The UNCAC review process has been making steady progress thanks to the commendable efforts of the UNODC, with the active participation of States Parties.
CSOs in most of the countries surveyed reported that their governments had opted for country visits by review teams (75% of the 83 countries surveyed). The number may rise when more information is known about reviews. Where they were aware of country visits, almost three-quarters of the CSOs (71%) said that at least one CSO was invited to meet a review team. (Again, numbers may rise.) In those cases, review teams benefited from CSOs’ experience, expertise and analysis and from views other than those of the government. The involvement of CSOs also contributed to raising public awareness and understanding of the review process.

Areas of concern

However, CSOs also confronted obstacles to their participation in the review process and to accessing its outputs, which has reduced the effectiveness of the process. Some of the obstacles are described below. In addition, it is a matter of concern that near the end of the third year of the review process only 34 reports and Executive Summaries have been completed.
  1. Lack of CSO opportunity to meet review teams in some countries
    CSOs in 25% of the countries surveyed reported that there was no country visit, so that CSOs could not meet with the review team. Additionally, in some of the countries where there were country visits, CSOs reported that they were not given the opportunity to meet with the review teams.
  2. Low CSO involvement in self-assessments
    In only about one-third of the countries surveyed (34%) did CSOs report that they were invited to contribute to the country self-assessments, despite the encouragement in the review guidelines. This means that opportunities for dialogue about country performance have been missed. It is assumed that the self-assessment phase has been completed in most third year countries.
  3. Lack of information about timetables and focal points
    In more than a third of the surveyed countries (39%), CSOs reported difficulties accessing information about the review process (such as information about country focal points). This hampered their ability to contribute. The delays in many country review processes have also created uncertainty about whether and when CSOs could contribute.
  4. Lack of access to the review process outputs
    Only an Executive Summary is available for most countries for which the reviews have been completed. These contain concise and useful information, but compared with available full reports, these summaries lack important information about how the review process was conducted and about its findings. The full reports are vital for overall public understanding of country successes and challenges.
    Ten countries have so far authorised UNODC to publish their self-assessments on the UNODC website and eight have authorised publication of their full review reports (Brunei Darussalam, Chile, Finland, France, Georgia, South Africa, Switzerland and the United Kingdom). Some countries may have published the review outputs on government websites but there is no readily available information about that.
    On the UNODC website, the outputs of the review process can now be accessed on the very useful “country profiles” pages. However, there is no clarity about when review outputs for a given country will be posted. To get an overview of all self-assessments and completed country review reports at any given point in time it is necessary to check country profiles for all countries under review.
  5. Insufficient data on enforcement efforts
    In 13 of 17 countries where Coalition CSOs prepared parallel reports, the CSOs reported difficulties in accessing enforcement data and case information in order to assess government enforcement efforts in practice. Some of this valuable information is included in the full review reports but not in the Executive Summaries.
  6. Lack of follow-up process
    Some CSOs reported that the lack of a process for following up on review recommendations resulted in a lack of momentum for implementation.

Experience with other anti-corruption review processes

Some CSOs that had participated in review processes for other anti-corruption conventions, such as those for the OAS Convention (Organisation of American States) and OECD Convention (Organisation for Economic Cooperation and Development) as well as the Council of Europe GRECO (Group of States against Corruption) review process, reported that their experience with UNCAC reviews compared unfavourably in some respects with that in the other review processes. The other processes provide some examples of good practice in how they involve CSOs in the review process, in the online information made available and in the practice of issuing media releases on completion of country reviews with highlights of the findings. It should be recognised, however, that the UNCAC review process is more complicated in terms of the number of countries involved and the scope of the articles under review.

Recommendations

Transparency International and the UNCAC Coalition have developed several proposals for enhancing the transparency and inclusiveness of the UNCAC review process.
  • Publish more information in an accessible location on the UNODC website and on government websites. This should include:
    • timely information about the process (such as information about focal point and schedule), including updates when changes are made;
    • the country’s self-assessment;
    • the full final review report;
    • aggregated information on the UNODC website about country reviews and outputs.
  • Ensure credible and participatory country reviews. This should include the following steps for governments:
    • consulting with relevant CSOs and other stakeholders on the self-assessment, to take advantage of their expertise and their interest;
    • arranging a country visit for the review team, to ensure quality reviews; and
    • inviting civil society representatives and other stakeholders to meet with country review teams and also to make written inputs.
  • Include CSOs and other stakeholders in discussions of technical assistance needs.
    Through multi-stakeholder discussions, governments can benefit from support for their anti-corruption efforts. One priority area for assistance is in the collection and publication of enforcement statistics and judgments or outcomes of proceedings. Improvements in this area will help ensure a sound basis for decision-making and public debate.
  • Establish a follow-up process to address review recommendations.
    Governments should announce steps taken and enlist stakeholders in the follow-up process. A follow-up process will help ensure that the findings of the reviews are given priority and that momentum for UNCAC implementation is maintained.
  • Establish a transparent, inclusive and adequately funded 2nd cycle of the UNCAC review process.
    The 5th COSP should adopt a specific timetable for the start of the 2nd cycle, including steps to be taken in the preparation process. The 2nd cycle should call for country visits, participation of civil society and other stakeholders in the review process and publication of the full country reports, the lists of focal points, and updated individual country review timetables. There should be stakeholder consultations as part of the preparation process for the 2nd cycle including participation in the Working Groups on Prevention and Asset Recovery.
    • Note: The full annex to this report contains three tables, one each for the first, second and third years of the review process. This report was submitted to the UNCAC Implementation Review Group with only the third table due to a word limit. The report with all three tables can be found here.
      » View the full report here


      Background of the United Nations Convention against Corruption
      LINK
      In its resolution 55/61 of 4 December 2000, the General Assembly recognized that an effective international legal instrument against corruption, independent of the United Nations Convention against Transnational Organized Crime (resolution 55/25, annex I) was desirable and decided to establish an ad hoc committee for the negotiation of such an instrument in Vienna at the headquarters of the United Nations Office on Drugs and Crime.
      The text of the United Nations Convention against Corruption was negotiated during seven sessions of the Ad Hoc Committee for the Negotiation of the Convention against Corruption, held between 21 January 2002 and 1 October 2003.

      The Convention approved by the Ad Hoc Committee was adopted by the General Assembly by resolution 58/4 of 31 October 2003. The General Assembly, in its resolution 57/169 of 18 December 2002, accepted the offer of the Government of Mexico to host a high-level political signing conference in Merida for the purpose of signing the United Nations Convention against Corruption.
      In accordance with article 68 (1) of resolution 58/4, the United Nations Convention against Corruption entered into force on 14 December 2005. A Conference of the States Parties is established to review implementation and facilitate activities required by the Convention.



Sunday, May 19, 2013

Carol Burris: Why The NY VAM Measure of High School Principals is Flawed


The New York State Education Department has been working on creating a VAM measure of high school principals to be used this year, even though its parameters have not been shared with those who will be evaluated.  It was just introduced to the Board of Regents this month.
Below is a letter that I sent to the Regents expressing my concerns.  Thanks to Kevin Casey, the Executive Director of SAANYS, Dr. Jack Bierwirth, the Superintendent of Herricks, and fellow high school principals Paul Gasparini and Harry Leonardatos for their review and input.
May 18, 2013
Dear Members of the Board of Regents:
It is mid-May and the school year is nearly over.  High school principals, however, have yet to be informed about what will comprise our VAM score, which will be 25% of our APPR evaluation this year. A PowerPoint presentation was recently posted on the State Education Department website, following the April meeting of the Regents.  The very few ‘illustrative’ slides relevant to our value-added measure do not provide sufficient detail regarding how scores will be derived, or information regarding the validity or reliability of the model that will be used to evaluate our work. The slides also do not answer the most important question of all—what specifically does VAM evaluate about a principal’s work?
Upon seeing the slides, I contacted SAANYS and they provided additional information.  What I received raised more doubts regarding the validity, reliability and fairness of the measure. I will be most interested to read the BETA report when it becomes available. More important, it is apparent that, like the 3-8 evaluation system, this measure may have unintended consequences for students and for schools.
Construct validity is the degree to which a measurement instrument, in this case the VAM score, actually measures what it purports to measure.   The measure, therefore, should isolate the effect of the high school principal on student learning, to the exclusion of other factors that might influence student achievement.  Because this model does not appear in any of the research on principal effectiveness, we do not know if it indeed isolates the influence of a high school principal on the chosen measures outside of the context of factors such as setting, funding, Board of Education policies and the years of service (and therefore influence) of the individual principal.
Simply because AIR can produce a bell curve on which to place high school leaders, it does not follow that the respective position of any principal on that curve is meaningful. That is because the individual components of the measure, which I discuss below, are highly problematic.
The First Component—ELA/Algebra Growth
The first proposed measure compares student scores on seventh and eighth grade tests against scores on two Regents exams—the Integrated Algebra Regents and the English Language Arts Regents.  It is a misnomer to call it a growth measure.  The Integrated Algebra Regents, which is taken by students between Grades 7-12, is a very different test from the seventh and eighth grade math tests.  It is an end-of-course exam, not one that shows continuous growth in skills.  Because it is a graduation requirement, some students take it several times prior to passing.
Because many students take the Integrated Algebra Regents in Grade 8, the number of data points with which to compare principals will also vary widely across the state.  For example, if you were to use the Integrated Algebra scores of my ninth-grade students this year, you would have 14 scores of the weakest math students in the ninth-grade class. That is because about 250 ninth-graders passed the test in Grade 8.  You would have a few more scores if you included 10th-12th graders who have yet to pass the test. These scores would be the scores of ELL students who are recent arrivals, students who transferred in from out of state or from other districts, students with severe learning disabilities, or students with attendance issues. In many of these cases, there would be no middle-school scores for comparison purposes.
At the end of the day, perhaps there would be 20 scores in the pool. How is that a defensible partial measure of the effectiveness of the principal of a school of nearly 1,200 students? Other schools accelerate all of their students in Grade 8, and still others accelerate nearly all eighth-graders. Still other schools give the Algebra Regents to some students in Grade 7, further complicating the problem.
The second part of the Math/ELA growth measure compares similar students' performance on the eighth-grade ELA exam and the ELA Regents. Some schools give 11th graders the ELA Regents in January, others in June. That means that, as with Algebra, principal 'effectiveness' will be compared using exams given at different times of the year. The 'growth' in English Language Arts skills takes place over the course of three years, in Grades 9, 10 and 11. Therefore, any principal who has been in her school for fewer than three years has only proportional influence on the scores.
The Second Component—The Growth in Regents Exams Passed
The second component of principal effectiveness counts the number of Regents examinations passed in a given year, comparing the progress of similar students. This is a novel concept, but again there is no research demonstrating that it has any relationship to principal effectiveness, and like the first measure, it is highly problematic.
First, not all Regents exams are similar in difficulty, although they are counted equally in the proposed model. There are 11th graders who take the Earth Science Regents, a basic science exam of low difficulty, and others who take the Physics Regents, which the state correctly categorizes as an advanced science exam. Both groups of students may have passed the same number of Regents exams (five) and have similar middle-school scores (thus meeting the test of 'similar students'), but Earth Science is certainly far easier to pass. Yet for each exam, the principal gets (or does not get) a comparative point.
And what of schools that are unusual in their course of studies? Scarsdale High School offers only 6 Regents exams, choosing instead to have its students take rigorous tests based on Singapore Math. It also gives its own challenging physics exam in lieu of the Regents. Will the principal of Scarsdale High School be scored ineffective because he cannot keep up the exam count with his high-performing students? Or will he be advantaged in the upper grades when his high-performing students are compared to students with low Regents counts who frequently failed exams, thus disadvantaging the principals of schools serving less affluent populations?
The Ardsley School District double accelerates a group of students in mathematics. Some students enter their high school having passed 3 Regents exams: two in mathematics and one in science. Who will be the 'similar students' for these ninth-graders? How will the principals of portfolio schools, which give only the English Regents, receive a score? Is the system fair to principals of schools with no summer school program, which gives students fewer opportunities to pass the exams? How will a VAM score be generated for principals of BOCES high schools, which give few or no Regents exams? Will Regents exams taken at BOCES count toward the score of the home school principal, who has absolutely no influence on instruction, or toward the BOCES principal? The scores are presently reported from the home school.
The Board of Regents allows a score of 55 to serve as a passing score for students with disabilities. How will this measure affect the principals of schools with large numbers of special education students, especially those schools whose mission is the improvement of the emotional health of the student rather than the attainment of a score of 65?
The Unintended Consequences of Implementation
All of the above bring into question the incentives and disincentives that will be created by the system.  This is the most important consideration of all, because the unintended consequences of change affect our students.  Will this point system incentivize principals to encourage students to take less challenging, easier-to-pass science Regents rather than the advanced sciences of chemistry and physics?  Will schools such as Scarsdale High School and portfolio schools abandon their unusual curricula from which we all can learn, in order to protect their principals from ineffective and developing scores?
Will principals find themselves in conflict with parents who want their children to attend BOCES programs in the arts and in career tech, rather than continue the study of advanced mathematics and science that are rewarded by the system?  Will we find that in some cases, it is in a principal’s interest that students take fewer exams so that they are compared with lower performing ‘similar’ students?  What will happen to rigorous instruction when simply passing the test is rewarded?  Will special education students be pressured to repeatedly take exams beyond what is in their best interest in order to achieve a 65 for ‘the count’? No ethical measure of performance should put the best interests of students in possible conflict with the best interests of the adults who serve them.
Most important of all, how will this affect the quality of leadership of our most at-risk schools, where principals work with greater stress and fewer supports? School improvement is difficult work, especially when it involves working with high-needs students. This model does not control for teacher effects; it is therefore a crude measure of both teacher and principal effects. If the leadership of the school is removed due to an ineffective VAM score, who will want to step in, knowing that receiving an ineffective score the following year is nearly inevitable?
Why would a new principal who receives a developing score want to risk staying in a school in need of strong leadership, knowing that it will take several years to achieve substantial improvement on any of these measures? The response that VAM is only a partial measure of effectiveness is hollow. An additional 15 percent is based on student achievement, and the majority of composite points fall in the ineffective category, deliberately designed so that an 'ineffective' in the first two categories assures an ineffective rating overall.
We frequently see the unintended consequences of changes in New York State education policy.  The press recently noted a drop in the Regents Diploma with Advanced Designation rate, which resulted from the decision to eliminate the August Advanced Algebra/Trigonometry and Chemistry Regents.  The use of the four-year graduation rate as a high-stakes measure has resulted in the proliferation of ‘credit recovery’ programs of dubious quality along with teacher complaints of being pressured to pass students with poor attendance and grades, especially in schools under pressure to improve.  These are but two obvious examples of the unintended consequences of policy decisions. The actions that you take and the measures that you use in a high-stakes manner greatly affect our students, often in negative ways that were never intended.
You have before you an opportunity to show courageous leadership. You can acknowledge with candor that the VAM models for teachers and principals developed by the department and AIR are not ready to be used because they have not met the high standards of validity, reliability and fairness that you require.  You can acknowledge that even if they were perfect measures, the unintended consequences from using them make them unacceptable. Or, you can favor form over substance, allowing the consequences that come from rushed models to occur.  You can raise every bar and continue to load on change and additional measures, or you can acknowledge and support the truth that school improvement takes time, capacity building, professional development and district and state support.
I hope that you will seize this moment to pause, ask important questions, provide transparency to stakeholders and seek their input before rushing yet another evaluation system into place. Creating a bell curve of relative performance may look like progress and science, but it is a measure without meaning that will not help schools improve.  Thank you for considering these concerns.
Sincerely,
Carol Burris, Ed. D.
Principal of South Side High School