A notional model for evaluating public diplomacy

The U.S. Advisory Commission on Public Diplomacy met last week to discuss its biennial report appraising U.S. Government activities intended to understand, inform, and influence foreign publics. In 2008, the Commission came out with a report on the human resource aspect of public diplomacy. This time, the Commission outsourced the task to the Lyndon Baines Johnson School of Public Affairs at the University of Texas at Austin. The project’s purpose was to review current public diplomacy measurement methods, assess gaps in those methods, and develop a comprehensive measurement framework. The result was the Public Diplomacy Model for the Assessment of Performance (PD-MAP).

Links to the report and presentation are at the end of this article.

The effort by the LBJ School took the form of a two-semester policy research project involving 15 graduate students and one professor. The team reviewed current programs, surveyed public diplomacy professionals and academics, convened a focus group, and interviewed several expert speakers.

The result was a report and a “notional model for measuring public diplomacy efforts.” The LBJ School describes PD-MAP as a “flexible framework that allows an evaluator to quantify the results of public diplomacy programs and evaluate their success in meeting” what the team identified as the “three strategic goals or outcomes of all public diplomacy programs”:

  1. Increasing understanding of US policy and culture
  2. Increasing favorable opinion towards the US
  3. Increasing the US’s influence in the world

Three themes were clear in both the presentation of the report and the report itself: the effort by the LBJ School was constrained by time, funding, and access. On the last point, the team said “limited access to the Department of State personnel within Washington, D.C. and in the field” made it “difficult to survey professionals, collect data, receive feedback, or even study the current measurement tools that were out there.” There was, however, “support and guidance” from the Office of the Under Secretary of Public Diplomacy and Public Affairs, including the Evaluation and Measurement Unit (EMU) and the ECA Office of Policy and Evaluation. On time and funding: work began in August 2009, at the start of the 2009-2010 academic year, but funding did not arrive until March 2010. Presumably, much of the work was completed by May 2010, the end of the academic year. It was not clear whether the time constraint was compounded by other course work carried by the graduate students.

The report and the model reflect the sincerity and hard work of the students. Theirs was not an easy task. However, the value and utility of their year-long effort are unclear. The PD-MAP arrived at conclusions that are painfully obvious to anyone who scratches the surface of public diplomacy, let alone the area of measuring effectiveness:

  • No coordination between PD/PA departments
  • Duplication of evaluation efforts
  • No uniform scale or basis for analyzing or comparing different programs
  • No single department coordinates or is held responsible for measurement standards
  • Insufficient relationship between program planning and evaluation

(As any reader of this blog will know, I have my reasons why PD and PA should be coordinated. I offer a not-so-subtle reminder they lack the coordination whenever I write the title or office of Judith McHale, or any of her predecessors: “and Public Affairs” is always italicized. The LBJ team never gives a reason for their recommendation, however.)

The team’s research appeared to be shallow. For example, it is unclear whether the team considered any role for the State Department’s Bureau of Intelligence and Research (INR), which reports directly to the Secretary of State, or the audience research work of the Broadcasting Board of Governors. Further, the absence of any discussion of the geographic bureaus, or of any other aspect of the diffusion of public diplomacy responsibilities across the department, suggests the team’s aperture was unnecessarily narrow. That narrowness reflected their limited experience and exposure to the issues they were investigating, limits on interviewing experts, and constraints on time. While the school may have been selected for a proven track record of public policy analysis, its lack of familiarity with public diplomacy policy, practices, and history came through in the methodology, survey, and report.

The team used four methods to collect information and develop the framework for the model: a review of current public diplomacy programs; a survey of public diplomacy professionals and academics; a focus group; and expert speakers. On the surface this appears adequate, but a closer inspection shows at least the last three methods fell far short of what should have been expected.

The team built a sample of 11 Diplomats-in-Residence at various institutions, 32 Foreign Service Officers, 4 current USAID professionals, 14 former ambassadors, and 26 academics. Just over half (55%) of the State Department members responded. The response rate for academia – select professors who are members of the Association of Professional Schools of International Affairs – “was considerably lower due to logistical difficulties.”

Only 14 complete responses to the survey were collected, and 1 of 13 partial responses was included (the other 12 were “deleted”).

While the team lamented the shortage of “funds,” “time” and “access” to conduct an adequate investigation, I have to believe that they could have done a better job reaching out to public diplomacy professionals, past and present, and academics than they did. Despite the team’s concern over “bias,” I would guess that American University’s recent survey on cultural diplomacy collected more than 15 useable responses. I’d be surprised if the survey sent out by Carolijn van Noort, a trainee at the Consulate General of the Netherlands in San Francisco, on assessing professional views about the importance of social networking in public diplomacy collected only 15 useable responses. I’d also guess USC’s Center on Public Diplomacy could have been of help as well.

Another “method” of collecting information was a “focus group.” This should really have been labeled a strategy meeting as two of the three participants in the focus group were the executive director and the deputy director of the client, the Advisory Commission. The third interviewee was the Diplomat-in-Residence at the LBJ School.

Still, the graduate students should be commended. They had embarked on a daunting task: creating the Philosopher’s Stone for public diplomacy. A major challenge is attempting to quantify the unquantifiable.

The issue of complex environments – the fact that programs do not happen in a vacuum – received only a cursory examination in the report. In presenting the report, the team punted a question on this from a public diplomacy officer, suggesting the evaluator move outside the tool and “capture the context in which your efforts are taking place…in a report, up the chain.”

The report did uncover some interesting “key themes” during the research, interviews, and survey. If these findings are not an artifact of a defective sample or defective data collection, they should raise some flags. For example:

  • 62% of respondents mentioned disseminating information on US foreign policy and goals as one of the purposes of public diplomacy.
  • 24% of respondents mentioned increasing understanding regarding US foreign policy and goals as one of the purposes of public diplomacy.
  • 43% of respondents identified influencing foreign audiences to comply with US foreign policy and goals as one of the purposes of public diplomacy. [emphasis mine]

Also, a question on the survey asked “Drawing on your experience in the public diplomacy field; list some short term (less than one year) goals of public diplomacy efforts.” Three of the approximately forty answers to the open-ended question were “Recruit more PD officers with 4/4 or higher language skills,” “Re-create USIA; separate the formal PD function from State,” and “Sell a particular weapons system.” These aren’t goals of public diplomacy.

The LBJ School naturally recommends their PD-MAP be rolled into production with the EMU.

Unfortunately, the good intentions of the LBJ School will probably amount to very little. It is unclear how useful their “notional model” is, and their analysis of the problem will, at best, be a supplement to the recent GAO report Engaging Foreign Audiences: Assessment of Public Diplomacy Platforms Could Help Improve State Department Plans to Expand Engagement, written for the House Foreign Affairs Committee. (Though the GAO report was released just prior to the July meeting of the Advisory Commission to discuss evaluation tools, it does not appear in the LBJ School report, perhaps because the semester ended two months prior.)

The time spent developing the tool surely benefited the students, but I will be surprised if it provides anything more than a marginal benefit to EMU. The analysis could have been done as an interim report by any one of the other universities already invested in public diplomacy, such as USC, George Washington, Georgetown, American University, Syracuse, Harvard, and Arizona State.

It is time the Advisory Commission begins to really tackle the challenges of public diplomacy and global engagement, not just in the Office of the Under Secretary for Public Diplomacy and Public Affairs, but across the State Department, into USAID, the Broadcasting Board of Governors and the rest of government, as well as across the public-private divide. Satisfying the minimum requirement of a report every two years is simply inadequate, let alone a report of such marginal value as this one on measurements. The Advisory Commission, an entity established by the Smith-Mundt Act of 1948 and whose members are appointed by the President and confirmed by the Senate, must begin fulfilling its mandate of issuing serious and substantive appraisals of U.S. Government activities intended to understand, inform, and influence foreign publics. There is no question of the need for such an oversight body to inform Congress, the White House and the public (a constituent of the Commission since its inception).

This was a missed opportunity for the Advisory Commission, and public diplomacy in general. It was, however, a great opportunity for the graduate students.

Download links:


5 thoughts on “A notional model for evaluating public diplomacy”

  1. Matt, thanks for the insight and analysis of this important topic. We are struggling with the challenge of assessing Information Operations and “associated capabilities” here at the command. The issue is larger than one government department; for example, Treasury designations are also communications tools. My feeling is that academia and the private sector are the places to go for the intellectual rigor needed to move the USG assessment effort forward. Any suggestions or comments are greatly appreciated.

  2. Matt, a lot of food for thought – and hopefully a springboard to action – especially regarding research approaches to evaluation of PD policy and programs. (If interested, please see my October posting on the Public Diplomacy Council’s Facebook wall.)
    Debbie Trent
    Doctoral student, The George Washington University
