OFFICE OF
THE INSPECTOR GENERAL

SOCIAL SECURITY ADMINISTRATION

PERFORMANCE INDICATOR AUDIT:
OVERALL SERVICE RATING

October 2005

A-15-05-15118

AUDIT REPORT


Mission

We improve SSA programs and operations and protect them against fraud, waste, and abuse by conducting independent and objective audits, evaluations, and investigations. We provide timely, useful, and reliable information and advice to Administration officials, the Congress, and the public.

Authority

The Inspector General Act created independent audit and investigative units, called the Office of Inspector General (OIG). The mission of the OIG, as spelled out in the Act, is to:

Conduct and supervise independent and objective audits and investigations relating to agency programs and operations.
Promote economy, effectiveness, and efficiency within the agency.
Prevent and detect fraud, waste, and abuse in agency programs and operations.
Review and make recommendations regarding existing and proposed legislation and regulations relating to agency programs and operations.
Keep the agency head and the Congress fully and currently informed of problems in agency programs and operations.

To ensure objectivity, the IG Act empowers the IG with:

Independence to determine what reviews to perform.
Access to all information necessary for the reviews.
Authority to publish findings and recommendations based on the reviews.

Vision

By conducting independent and objective audits, investigations, and evaluations, we are agents of positive change striving for continuous improvement in the Social Security Administration's programs, operations, and management and in our own office.

MEMORANDUM

Date: October 4, 2005

To: The Commissioner

From: Inspector General

Subject: Performance Indicator Audit: Overall Service Rating (A-15-05-15118)

We contracted with PricewaterhouseCoopers, LLP (PwC) to evaluate 16 of the Social Security Administration's (SSA) performance indicators established to comply with the Government Performance and Results Act. The attached final report presents the results of one of the performance indicators PwC reviewed. For the performance indicator included in this audit, PwC's objectives were to:
Assess the effectiveness of internal controls and test critical controls over the data generation, calculation, and reporting processes for the specific performance indicator.
Assess the overall reliability of the performance indicator's computer-processed data. Data are reliable when they are complete, accurate, consistent, and not subject to inappropriate alteration.
Test the accuracy of results presented and disclosed in the Fiscal Year 2004 Performance and Accountability Report.
Assess whether the performance indicator provides a meaningful measurement of the program and the achievement of its stated objective.

This report contains the results of the audit for the following indicator:

Percent of people who do business with SSA rating overall services as "excellent," "very good," or "good."

Please provide within 60 days a corrective action plan that addresses each recommendation. If you wish to discuss the final report, please call me or have your staff contact Steven L. Schaeffer, Assistant Inspector General for Audit, at (410) 965-9700.

Patrick P. O'Carroll, Jr.

MEMORANDUM

Date: September 15, 2005

To: Inspector General

From: PricewaterhouseCoopers, LLP

Subject: Performance Indicator Audit: Overall Service Rating (A-15-05-15118)

OBJECTIVE

The Government Performance and Results Act (GPRA) of 1993 requires the Social Security Administration (SSA) to develop performance indicators that assess the relevant service levels and outcomes of each program activity. GPRA also calls for a description of the means employed to verify and validate the measured values used to report on program performance.

Our audit was conducted in accordance with generally accepted government auditing standards for performance audits. For the performance indicator included in this audit, our objectives were to:

1. Assess the effectiveness of internal controls and test critical controls over the data generation, calculation, and reporting processes for the specific performance indicator.

2. Assess the overall reliability of the performance indicator's computer-processed data. Data are reliable when they are complete, accurate, consistent, and not subject to inappropriate alteration.

3. Test the accuracy of results presented and disclosed in the Fiscal Year (FY) 2004 Performance and Accountability Report (PAR).

4. Assess whether the performance indicator provides a meaningful measurement of the program and the achievement of its stated objective.

BACKGROUND

We audited the following performance indicator as stated in the SSA FY 2004 PAR:

Performance Indicator: Percent of people who do business with SSA rating the overall service as "excellent," "very good," or "good"
FY 2004 Goal: 83%
FY 2004 Reported Results: 84.2%

SSA provides a range of services to the general public, including the issuance of Social Security cards and the payment of retirement and long-term disability benefits. The general public has a variety of service options for obtaining information and conducting business with SSA: visiting or calling local field and hearing offices, calling SSA's national toll-free 800 number, and using SSA's website. The majority of SSA's customers prefer to conduct business by telephone, and many choose to deal with 1 of the 1,336 local field offices and 140 hearing offices.

This performance indicator is linked to SSA's strategic objective to "Improve service with technology," which is linked to SSA's strategic goal "to deliver high-quality, citizen-centered service." This strategic goal is linked to one of the five government-wide goals on the President's Management Agenda, "Expanded Electronic Government," which addresses all Government agencies' ability to simplify the delivery of high-quality service while reducing the cost of delivering those services.

To assess its progress in meeting this objective and goal, SSA's Office of Quality Assurance and Performance Assessment (OQA) conducts a series of tracking surveys to measure a customer's satisfaction with his or her last contact with SSA. SSA conducts three surveys: the 800-Number Caller Survey, the Field Office (FO) Caller Survey, and the Office Visitor Survey. OQA uses a 6-point rating scale ranging from "excellent" to "very poor." To report the final overall service satisfaction, OQA combines the three customer satisfaction surveys, weighting each survey by the customer universe it represents.
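For illustration only, with hypothetical figures rather than SSA's actual universes: if the 800-number, FO caller, and office visitor universes were 60, 30, and 10 million customers with satisfaction rates of 85, 82, and 80 percent, the combined rate would be (0.85 × 60 + 0.82 × 30 + 0.80 × 10) / 100 = 83.6 percent.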

800-Number Caller Survey

The 800-Number Caller Survey is conducted with a sample of individuals who received customer service using the 800 number during the month of March. When a customer calls the toll-free number, the Automatic Number Identifier (ANI) system collects data about the call, i.e., phone number, date, time, and duration. The sample is randomly drawn on a biweekly basis from data supplied by the telephone company over the course of the sample month. OQA excludes calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable. OQA contracts with a firm to call the customer, conduct the survey, and compile the responses. OQA then analyzes and reports the results.

Field Office Caller Survey

The FO Caller Survey is conducted with a sample of individuals who called 1 of 52 randomly selected FOs during the month of April. During the survey period, an OQA contractor installs caller-ID equipment on the main incoming phone lines of the sampled FOs. Each FO's caller-ID system records the date and time of contact, length of call, and phone number of the caller. From the caller population accumulated, OQA makes a biweekly, random selection of callers over the course of the sample month. OQA excludes calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable. OQA contracts with a firm to call the customer, conduct the survey, and compile the responses. OQA then analyzes and reports the results.

Office Visitor Survey

The Office Visitor Survey is conducted with a sample of individuals who visit 1 of 52 randomly selected FOs or 1 of 13 randomly selected Hearing Offices (HO) over the course of an 8-week period from July to September. FOs are selected by region, with all 10 SSA regions represented. Each sampled office participates for 1 week of the sample period. The FO or HO records each customer's name, address, telephone number, and reason for the visit and forwards this information electronically to OQA daily. Every 2 to 3 days, OQA selects a random sample of customers to participate in the survey. A contractor mails the survey to the selected customers. Customers are asked to return the survey directly to OQA, which analyzes and reports the results.

For additional detail on the surveys and reporting process, refer to the flowcharts in Appendix C.

RESULTS OF REVIEW

Overall, we found that the statistical methods for sample selection and estimation used by SSA were consistent with sound statistical theory. We were able to recalculate and verify SSA's overall customer satisfaction estimated percentage using the weights provided by SSA.

However, SSA did not have adequate internal controls over the documentation of its processes, and potential bias existed in the customer satisfaction surveys' sample selection process, response rates, and combination of survey instruments. Furthermore, SSA could improve the performance indicator's linkage to the strategic objective to comply with GPRA and Office of Management and Budget (OMB) guidance and could improve the meaningfulness of the performance indicator. Finally, the PAR did not accurately disclose the data trend and the description of the data sources.

Internal Controls

Although we were able to recalculate and verify the results reported in the PAR, we found that SSA did not have sufficient and current documentation to enable an efficient review of the performance indicator results. Several of the calculations that supported the statistical estimates were not completely and accurately documented. We also identified several discrepancies within the documentation, which SSA subsequently resolved or which proved to be immaterial.

In addition, SSA had used the previous Office of the Inspector General audit report (conducted for FY 2002) as documentation of the survey sampling methodology. The absence of "…well-defined documentation processes that contain an audit trail, verifiable results, and specify document retention periods so that someone not connected with the procedures can understand the assessment process" did not comply with standards defined by OMB Circular A-123, Management's Responsibility for Internal Control.

Also, in the sample selection process, a potential bias existed as a result of the surveys being conducted over a limited time period. SSA selected the samples from a limited time period (e.g., 1 or 2 months) and then projected the results to the entire year. The results from the time periods not sampled may have been systematically different from the results of the periods sampled. SSA did not provide analysis to substantiate the similarities and differences between sampled and non-sampled time periods.

Additionally, a potential bias existed due to survey nonresponse. Nonresponse can lead to biased results if respondents and nonrespondents differ in their customer satisfaction. SSA could not provide documentation of an analysis of the response rates for the different surveys. Typically, organizations periodically evaluate the effect of nonresponse, for example through statistical tests between respondents and nonrespondents on characteristics believed to be correlated with customer satisfaction.
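One way such a test could be operationalized is sketched below. This is a minimal illustration under assumed inputs, not a procedure SSA uses: a two-proportion z-test on a characteristic observable for both groups, such as whether a sampled call exceeded five minutes, which the call-record data would capture. All counts are hypothetical.

import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two groups share the same proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 900 of 1,800 respondents vs. 500 of 800
# nonrespondents had calls longer than five minutes.
z = two_proportion_z(900, 1800, 500, 800)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates a difference at the 5% level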

A potential measurement error existed in combining the three survey responses to reach an overall conclusion. Each survey phrased the question as follows:

"Overall, how would you rate the service you received the day you called Social Security's 800 number?" (800 callers)
"Overall, how would you rate the telephone service you received that day?"
(FO callers)
"Overall, how would you rate Social Security's service for your recent office visit?" (FO/HO visitors).

We understand that the questions were similar but were modified and reworded to reflect the nature of each of the three ways customers receive service. Despite the same underlying question being asked in each survey, differences existed among the survey instruments, including:

Questions that existed on one survey, but not the others; and
Different question ordering on each of the surveys.

Modifying question order and wording on surveys is a technique often used to reduce order bias. For these surveys, however, we were informed that the differences were not designed to mitigate bias but to gather additional information for each survey type. As a result, differences in the wording of the questions preceding the customer satisfaction question may have introduced bias into the combined survey response. Quantifying such potential bias would require a separate study. To minimize measurement errors, the survey designs should be as similar as possible.

Accuracy of PAR Presentation and Disclosure

We found the trend was not accurately described in the PAR. The trend description indicated that this was "…the third year in a row that the public's perception of SSA's service reflected a statistically significant improvement." The actual results, however, were similar to the prior year's and did not represent a statistically significant improvement.

Additionally, the FY 2004 PAR did not accurately disclose the current description of the data source used to report the indicator results. In prior years, the "Interaction Tracking System" was used to describe the data accumulation process; however, SSA replaced this term with "Service Satisfaction Survey." In addition, the PAR did not disclose the underlying data sources for the three types of customer interactions (i.e., 800-number callers, FO callers, and FO/HO visitors) used to support the performance indicator calculation.

Performance Indicator Meaningfulness

The performance indicator did not clearly support SSA's strategic objective to "Improve service with technology," although it did support the strategic goal "to deliver high-quality, citizen-centered service." The indicator measured the service satisfaction of SSA's customers who called the 800 number, called an FO, or visited an FO or HO. However, associated technology improvements, e.g., the use of the Internet or other technology investments, were not identified. GPRA requires "…performance indicators [which are linked to strategic objectives and goals] to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity," and OMB's Performance Assessment Rating Tool (PART) guidance requires "…specific, easily understood program outcome goals that directly and meaningfully support the program's mission and purpose."

SSA can further improve the meaningfulness of the results reported for this performance indicator. The overall credibility of performance results would be enhanced by disclosing SSA's participation and results in Governmentwide customer satisfaction surveys that compare results across Federal agencies.

CONCLUSION AND RECOMMENDATIONS

We recommend SSA:

1. Improve documentation of policies and procedures. The documentation should be complete and accurate to define all of the steps ultimately used to report performance results in the PAR. Estimates and calculations used to generate the survey results should be clearly documented and supported to readily enable an independent assessment. SSA management should annually update and take ownership of the documentation in compliance with OMB A-123 guidance.

2. Periodically assess the impact of performing the surveys over limited time periods, and consider performing abbreviated surveys on smaller samples at other times during the year.

3. Consider the effect of nonresponse through statistical tests between respondents and nonrespondents for characteristics believed to be correlated with customer satisfaction. We recognize that SSA has found it difficult to identify a comparable population to determine the effect of nonresponse; however, efforts should continue to be made to examine the characteristics of the respondents and the representation of the population, since segments of the population lacking coverage may be a result of nonresponse. Model-based techniques using information gathered on respondents may be helpful to assess the impact of nonresponse.

4. To minimize measurement errors, design each survey to be as similar as possible.

5. Implement controls to ensure the accuracy of the PAR's data sources disclosed to support the performance results.

6. Provide a direct linkage of the performance indicator to the Agency's strategic goals and objectives. SSA should expressly comply with criteria established by GPRA and OMB PART guidance, which require that performance indicators be linked to the Agency's relevant strategic goal and objective. If a direct linkage cannot be supported, the Agency should disclose the basis for selecting the indicator and why it deviates from established criteria. SSA should further consider indicators that include satisfaction of Internet users and other technology investments that support the strategic objective.

7. Provide a linkage to supplementary information in the PAR to increase the credibility of the survey results by comparison to other Federal agencies, e.g., the American Customer Satisfaction Index.

AGENCY COMMENTS

SSA agreed with four recommendations (numbers 1, 4, 5, and 7), agreed in part with two recommendations (numbers 3 and 6), and disagreed with recommendation 2. Although SSA stated it agreed with recommendation 3, it also stated there are no meaningful data available regarding characteristics of the sample population that could be correlated to customer satisfaction. SSA agreed in part with recommendation 6 and agreed to enhance the linkage between the performance measures that relate to technology and the strategic objective; however, SSA believes the reference to OMB PART should be removed, since the performance indicator was not a PART measure. SSA disagreed with recommendation 2, stating that the benefit of conducting surveys more frequently is not sufficient to justify the time and expense of administering them. The full text of SSA's comments can be found in Appendix E.

PWC RESPONSE

In response to recommendation 2, we believe that if results begin to fluctuate, SSA should conduct surveys more frequently, and consider performing abbreviated surveys on smaller samples.

For recommendation 3, although the Agency agreed with this recommendation, it did state that there is no meaningful data available regarding characteristics of the sample population that could be correlated to customer satisfaction. We continue to recommend that SSA consider the effect of nonresponse for characteristics believed to be correlated with customer satisfaction.

In response to recommendation 6, SSA did agree to enhance the linkage between the performance measures that relate to technology and the strategic objective. However, we disagree that the reference to OMB PART guidance should be removed. OMB PART guidance was intended to improve the overall reporting of performance measurement data. As such, we continue to believe that the OMB PART guidance should be taken into consideration when developing the linkage of the Agency's performance indicators, and the corresponding strategic goals and objectives.

Appendices
APPENDIX A - Acronyms
APPENDIX B - Scope and Methodology
APPENDIX C - Process Flowcharts
APPENDIX D - Statistical Methodology
APPENDIX E - Agency Comments

Appendix A
Acronyms
ANI Automatic Number Identifier
FO Field Office
FY Fiscal Year
GI General Inquiries
GPRA Government Performance and Results Act
HO Hearing Office
OMB Office of Management and Budget
OQA Office of Quality Assurance and Performance Assessment
OSM Office of Strategic Management
OSSAS Office of Statistics and Special Areas
OTSO Office of Telecommunications and Systems Operations
PAR Performance and Accountability Report
PART (OMB's) Performance Assessment Rating Tool
Pub. L. No. Public Law Number
SSA Social Security Administration
U.S.C. United States Code

Appendix B
Scope and Methodology
We updated our understanding of the Social Security Administration's (SSA) Government Performance and Results Act (GPRA) processes through research and inquiry of SSA management. We also requested that SSA provide various documents regarding the specific programs being measured, as well as the specific measurements used to assess the effectiveness and efficiency of the related programs.

Through inquiry, observation, and other substantive testing, including testing of source documentation, we performed the following:

Reviewed prior SSA, Government Accountability Office, Office of the Inspector General, and other reports related to SSA GPRA performance and related information systems.
Met with the appropriate SSA personnel to confirm our understanding of the performance indicator.
Flowcharted the process (see Appendix C).
Tested key controls related to manual or basic computerized processes (e.g., spreadsheets, databases, etc.).
Conducted and evaluated tests of the automated and manual controls within and surrounding each of the critical applications to determine whether the tested controls were adequate to provide and maintain reliable data to be used when measuring the specific indicator.
Identified attributes, rules, and assumptions for each defined data element or source document.
Recalculated the metric or algorithm of key performance indicators to ensure mathematical accuracy.
For those indicators with results that SSA determined using computerized data, we assessed the completeness and accuracy of that data to determine the data's reliability as it pertains to the objectives of the audit.

As part of this audit, we documented our understanding, as conveyed to us by Agency personnel, of the alignment of the Agency's mission, goals, objectives, processes, and related performance indicators. We analyzed how these processes interacted with related processes within SSA and with the existing measurement systems. This understanding was used to determine whether the performance indicators appear valid and appropriate given SSA's mission, goals, objectives, and processes.

We conducted this performance audit in accordance with generally accepted government auditing standards.
In addition to the steps above, we specifically performed the following to test the indicator included in this report:

Assessed the sample selection methodology, including the creation of the sample frame.
Recalculated the survey results, including the survey weights, for each of the three types of performance surveys.
Recalculated the combined survey estimate for the SSA customer satisfaction performance.
Tested key controls over the Blaise system used to calculate the Office Visitor Survey.

Appendix C
Flowchart of Customer Satisfaction Surveys

Customer Service Survey - 800-Number Caller:

The customer calls Social Security Administration's (SSA) 800-Number.
The Automatic Number Identifier (ANI) system records the customer data.
MCI furnishes SSA with the ANI data.
Office of Statistics and Special Areas (OSSAS) selects the completed calls within the sampling period from the ANI data. A completed call is a call where the customer has selected to speak with an SSA representative or selected an option from the automated menu.
OSSAS selects the eligible calls from the completed calls. An eligible call is one that has been made between 7 a.m. and 7 p.m. during the caller's local time, came from a phone number that made fewer than 100 calls to SSA that day, and was made during the sample period.
OSSAS selects a random sample of callers to participate in the survey from the list of eligible calls.
OSSAS sends an electronic file with the selected customers' information to the contractor.
The contractor administers the survey.
The contractor compiles survey responses and sends them electronically to OSSAS.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the Government Performance and Results Act (GPRA) performance indicator.
OSSAS combines the results from surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to Office of Strategic Management (OSM).
OSM publishes the GPRA results.

Customer Service Survey - FO Caller:

OSSAS selects the Field Offices (FO) to participate in the survey.
Office of Telecommunications and Systems Operations (OTSO) arranges for the installation of a Caller ID and other equipment in the selected FOs.
Customers call the FO.
The telephone contractor downloads and extracts customer information from the Caller ID system.
The contractor sends an electronic file of all the callers to OSSAS.
OSSAS extracts the data from the electronic file.
OSSAS selects the eligible FO callers.
OSSAS selects a sample of eligible FO callers to participate in the survey.
OSSAS sends an electronic file with the selected customers' information to the contractor.
The contractor administers the survey.
The contractor compiles survey responses and sends them electronically to OSSAS.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the GPRA performance indicator.
OSSAS combines the results from the surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to OSM.
OSM publishes the GPRA results.

Customer Service Survey - Office Visitor:

OSSAS selects a random sample of 52 FOs and 13 Hearing Offices (HO) to participate in the survey.
OSSAS notifies the FO and HO of their selection to participate in the survey.
The customer visits the FO or HO.
The FO or HO enters the customer's information into the Access database or other tracking system when the customer checks in at the reception desk.
The FO or HO sends the electronic list of customers and their information to OSSAS.
OSSAS selects a random sample of customers to participate in the mailed survey.
OSSAS electronically sends the names and addresses of selected customers to the contractor.
The contractor administers the survey via mail.
The customer returns the survey to OSSAS after completion.
OSSAS enters the survey responses into Blaise.
OSSAS reviews the information entered into Blaise for completion.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the GPRA performance indicator.
OSSAS combines the results from the surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to OSM.
OSM publishes the GPRA results.

Appendix D
Statistical Methodology

Methodology for the Sample Selection of Survey Participants

800 Number Caller Survey

In Fiscal Year (FY) 2004, the Office of Quality Assurance and Performance Assessment (OQA) randomly selected a sample of 2,600 phone numbers of customers who called SSA's 800 number to participate in the 800-Number Caller Survey. The sample was selected on a biweekly basis from a population of all phone numbers of callers that reached the 800 number during the month of March. As the biweekly samples were drawn, OQA excluded phone numbers sampled previously during the month, phone numbers that reached the 800 number 100 times or more within a single day, and calls made between 7 p.m. and 7 a.m. local time. Calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable were excluded during the interview process. Service to SSA's 800-number callers is provided by agents in teleservice centers and SPIKE units in SSA's program service centers. The Field Offices (FO) and Hearing Offices (HO) do not provide customer service via the 800 number; therefore, OQA does not select FOs or HOs to participate in this survey.
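The eligibility rules described above can be summarized in code. The following is a minimal sketch with an assumed record layout, not OQA's actual processing:

from collections import Counter

def eligible_calls(calls, already_sampled):
    """calls: dicts with 'phone', 'date', and 'local_hour' keys (assumed schema)."""
    # Identify numbers that reached the 800 number 100 or more times in one day.
    per_day = Counter((c["phone"], c["date"]) for c in calls)
    heavy = {phone for (phone, date), n in per_day.items() if n >= 100}
    return [
        c for c in calls
        if c["phone"] not in already_sampled   # not sampled earlier this month
        and c["phone"] not in heavy            # under the 100-calls-per-day cap
        and 7 <= c["local_hour"] < 19          # between 7 a.m. and 7 p.m. local time
    ]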

Field Office Caller Survey

Each year, OQA selects a sample of 110 offices to participate in its FO Telephone Service Evaluation, a review in which calls are monitored from remote locations to ensure accuracy. OQA selects the sample without replacement from the current population of eligible FOs. Eligible FOs are those that have not been selected in previous years.

From this initial sample of 110 FOs, OQA selects a sub-sample of offices to participate in the FO Caller Survey. The sub-sample is a systematic sample from the parent sample, after sorting the parent sample by telephone system, region, and area. Thus, the sample of offices included in the FO Caller Survey has a distribution of telephone system type, region, and area similar to the parent FO Telephone Service Evaluation sample. In FY 2004, the sub-sample of offices selected for the FO Caller Survey consisted of 75 of the 110 offices from the FO Telephone Service Evaluation. Because of phone system limitations, not all sub-sampled offices can be equipped to enable identification of phone numbers for survey sample selection purposes; in FY 2004, 52 of the 75 offices initially sub-sampled could be equipped as necessary. From the population of phone numbers identified in the sub-sampled offices during the month of April, OQA selected, on a biweekly basis, a sample of 2,600 phone numbers of field office callers to participate in the survey. OQA excluded from the sample all phone numbers that appeared to be invalid and phone numbers sampled in previous biweekly selections for the FY 2004 survey. In addition, during the interview process, calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable were excluded, as were personal calls to field office employees.
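A systematic selection of this kind can be sketched as follows (assumed field names, illustrative only): sort the parent sample on the stratification keys, then step through it at a fixed interval from a random start.

import random

def systematic_subsample(offices, n):
    """offices: dicts with 'phone_system', 'region', and 'area' keys (assumed)."""
    ordered = sorted(offices, key=lambda o: (o["phone_system"], o["region"], o["area"]))
    step = len(ordered) / n          # e.g., 110 / 75 for the FY 2004 sub-sample
    start = random.uniform(0, step)  # random start keeps selection probabilities equal
    return [ordered[int(start + i * step)] for i in range(n)]

Because the parent sample is sorted before stepping, the sub-sample inherits the parent's distribution of telephone system type, region, and area.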

The FO Caller Survey sample is drawn from all types of incoming lines an office has available for public use; although all FOs handle the same types of business over the phone, depending on the size of the office, some have one type of line and some have two types available to receive incoming calls. The sample selection process takes into account whether a particular office has one or two types of public lines available to ensure that all incoming phone numbers have an opportunity for selection. An approximately equal number of calls is sampled from each FO.

Phone numbers of callers from field offices that were not part of the sub-sample of offices equipped for caller identification did not have an opportunity for selection.

Office Visit Survey

The Office Visitor Survey is conducted at 52 FOs and 13 HOs once each year. For each survey execution, the offices are selected without replacement from the current list of eligible FOs or HOs. Eligible offices are those that have not been selected in previous years.

While HOs are selected as a simple random sample, OQA selects FOs in a stratified random sample by region. The number of FOs from each region is proportional to the number of FOs in that region. The distribution of FOs sampled by region is:

Region Name      FOs Sampled
Boston                3
New York              5
Philadelphia          6
Atlanta              10
Chicago               9
Dallas                6
Kansas City           3
Denver                2
San Francisco         6
Seattle               2

Each sampled office participates for 1 week of the 8-week survey period, which extends from July to September. During its designated week, each sampled office submits identifying information to OQA daily for each individual who visited. OQA selects a random sample of 5,000 customers, at a rate of 130 per day, from the reported population to participate in the Office Visitor Survey. Customer records without a valid address and visitors already sampled for the current survey in a previous daily selection are excluded from the sample selection. Visitors to offices that were not part of the initial field office and hearing office sample did not have an opportunity for selection.
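The proportional allocation underlying the FO stratification can be illustrated as follows. This is a sketch of the mechanics only; the regional office counts an implementation would use are SSA's and are not reproduced here.

def proportional_allocation(strata_sizes, total_sample):
    """strata_sizes: {region: number of FOs}; returns {region: FOs to sample}."""
    population = sum(strata_sizes.values())
    alloc = {r: round(n / population * total_sample) for r, n in strata_sizes.items()}
    drift = total_sample - sum(alloc.values())
    if drift:  # repair rounding so allocations sum to the intended total
        largest = max(strata_sizes, key=strata_sizes.get)
        alloc[largest] += drift
    return alloc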

Calculation of the Survey Results

When determining the estimated percentage of Excellent (6) responses for overall satisfaction, SSA summed the weights for the Excellent responses and divided by the total weight of all valid responses:

$$\hat{P}_{6} = \frac{\sum_{j} W_{6j}}{\sum_{i=1}^{6} \sum_{j} W_{ij}}$$

where $i$ is the value of the response (ranging from 1 to 6), $j$ indexes the respondents, and $W_{ij}$ is the weight of the $j$th person whose response was $i$.

When determining the estimated percentage of Excellent/Very Good/Good (6/5/4) responses for overall satisfaction, SSA summed the weights for the Excellent/Very Good/Good responses and divided by the total weight of all valid responses:

$$\hat{P}_{4\text{-}6} = \frac{\sum_{i=4}^{6} \sum_{j} W_{ij}}{\sum_{i=1}^{6} \sum_{j} W_{ij}}$$

where $i$, $j$, and $W_{ij}$ are as defined above.

Calculation of the Combined Overall Survey Result

SSA combined the results from the three surveys to derive one overall measure of customer satisfaction. The percentage of Excellent (or Excellent/Very Good/Good) responses for each survey was multiplied by its projected universe: 800 Number Caller, FO Caller, or Office Visitors (FO Visitor and HO Visitor). These products were then summed and divided by the total universe:

$$R = \frac{\sum_{k} R_{k} U_{k}}{\sum_{k} U_{k}}$$

where $k$ represents either 800 Number Caller, FO Caller, or Office Visitors (FO Visitor and HO Visitor), $R_k$ is the percentage of Excellent (or E/VG/G) responses in the $k$th universe, and $U_k$ is the population total for the $k$th universe.
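As a rough illustration of these calculations (a minimal sketch with hypothetical inputs, not OQA's actual code), the following computes a survey's weighted percentage of "good" or better responses and the universe-weighted combination across the three surveys:

def weighted_pct(responses, good=frozenset({4, 5, 6})):
    """responses: list of (rating 1-6, weight W_ij) pairs for valid responses."""
    total = sum(w for _, w in responses)
    return sum(w for r, w in responses if r in good) / total

def combined_rate(surveys):
    """surveys: list of (R_k, U_k) pairs -- survey rate and universe size."""
    return sum(r * u for r, u in surveys) / sum(u for _, u in surveys)

# Hypothetical example: rates of 0.85, 0.82, and 0.80 with universes of
# 60, 30, and 10 million yield a combined rate of 0.836.
print(combined_rate([(0.85, 60e6), (0.82, 30e6), (0.80, 10e6)]))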

Appendix E
Agency Comments

SOCIAL SECURITY

MEMORANDUM

Date: September 9, 2005

To: Patrick P. O'Carroll, Jr.
Inspector General

From: Larry W. Dye
Chief of Staff

Subject: Office of the Inspector General (OIG) Draft Report, "Performance Indicator Audit: Overall Service Rating" (A-15-05-15118)--INFORMATION

We appreciate OIG's efforts in conducting this review. Our comments on the draft report's recommendations are attached.

Please let me know if you have any questions. Staff inquiries may be directed to Candace Skurnik, Director, Audit Management and Liaison Staff, at extension 54636.

SSA Response

COMMENTS ON THE OFFICE OF THE INSPECTOR GENERAL'S (OIG) DRAFT REPORT, "PERFORMANCE INDICATOR AUDIT: OVERALL SERVICE RATING"
(A-15-05-15118)

Thank you for the opportunity to review and provide comments on this draft report.

Recommendation 1

Improve documentation of policies and procedures. The documentation should be complete and accurate to define all of the steps ultimately used to report performance results in the Performance and Accountability Report (PAR). Estimates and calculations used to generate the survey results should be clearly documented and supported to readily enable an independent assessment. SSA management should annually update and take ownership of the documentation in compliance with OMB A-123 guidance.

Comment

We agree. Although documentation of the many processes involved in conducting each of the surveys is available, we agree that it could be consolidated and presented in a more uniform manner. We will review and compile the relevant material as workload demands permit. We have set September 2006 as the target date for completing this task.

Recommendation 2

Periodically assess the impact of performing the surveys over limited time periods, and consider performing abbreviated surveys on smaller samples at other times during the year.

Comment

We disagree. Although we understand the intent of the recommendation, we have previously considered the feasibility of conducting the surveys more frequently and have found that the benefit of doing so is not sufficient to justify the time and expense it would take to administer the surveys. The 800-Number Caller Survey, for instance, began as a quarterly survey in the late 1980s and later became a semiannual survey because fluctuations in results were too minimal to support the resource expenditure. More recently, effective with fiscal year (FY) 2003, because of the consistency of results over time and increasing resource constraints, the Agency decided to reduce the frequency even further to once per year.

Recommendation 3

Consider the effect of nonresponse through statistical tests between respondents and nonrespondents for characteristics believed to be correlated with customer satisfaction. We recognize that SSA has found it difficult to identify a comparable population to determine the effect of nonresponse; however, efforts should continue to be made to examine the characteristics of the respondents and the representation of the population, since segments of the population lacking coverage may be a result of nonresponse. Model-based techniques using information gathered on respondents may be helpful to assess the impact of nonresponse.

Comment

We agree that a nonresponder analysis would be an important and valuable undertaking to identify potential bias. However, for the performance measure surveys, there is no meaningful data available regarding characteristics of the sample population that could be correlated to customer satisfaction. For example, the only information about the sample population in the caller surveys is the fact that someone from the identified telephone number called SSA; even if additional characteristics were collected on responders during the survey, they would shed no light on nonresponders since comparable information about the sample is not available. Techniques that would rely on an external database for identification of population characteristics would have to represent a population comparable to SSA callers; to our knowledge, such a database does not exist. Also note that subsampling nonresponders for additional follow-up contacts, another technique sometimes employed, would be unlikely to yield results since the Agency's contractor already makes 15 attempts over the course of 3 weeks to reach nonresponders.

Recommendation 4

To minimize measurement errors, design each survey to be as similar as possible.

Comment

We agree. Our intent has always been to design the surveys to be as similar as possible, but we also need to reflect the particulars of the individual's experience that may vary depending on the mode of contact. However, we have noted a couple of inconsistencies in the order of some of the questions that we plan to remedy for the next round of surveys in FY 2006.

Recommendation 5

Implement controls to ensure the accuracy of the PAR's data sources disclosed to support the performance results.

Comment

We agree. The data source in the FY 2005 Annual Performance Plan was changed to reflect the correct information. As a result, the FY 2005 PAR will disclose the proper data source.

Recommendation 6

Provide a direct linkage of the performance indicator to the Agency's strategic goals and objectives. SSA should expressly comply with criteria established by GPRA and OMB PART guidance, which require that performance indicators be linked to the Agency's relevant strategic goal and objective. If a direct linkage cannot be supported, the Agency should disclose the basis for selecting the indicator and why it deviates from established criteria. SSA should further consider indicators that include satisfaction of Internet users and other technology investments that support the strategic objective.

Comment

We agree in part. Regarding OMB PART guidance, we believe this reference should be removed, since the performance indicator was not a PART measure. Regarding the suggestion that we provide a direct linkage of the performance indicator to the Agency's strategic goals and objectives, we partially agree. We believe that the linkage is apparent but that it could be enhanced. During the development of the new Agency Strategic Plan for FY 2006 through FY 2011, the language for the "technology" strategic objective has been expanded so that there will be a clear linkage between the performance measures that relate to technology and the strategic objective. In the interim, we are expanding the language in the performance section of the FY 2005 PAR to make the linkage more apparent.

Recommendation 7

Provide a linkage to supplementary information in the PAR to increase the credibility of the survey results by comparison to other Federal agencies, e.g., the American Customer Satisfaction Index.
Comment

We agree. A reference to the American Customer Satisfaction Index (ACSI) is already contained on page 24 of the FY 2004 PAR. To enhance the readability of the document, we try to avoid duplicating information that is contained in other sections. As we prepare the FY 2005 PAR, we will consider either providing a reference to the appropriate page in the PAR where the ACSI scores are presented or moving this information to the performance section that discusses this particular indicator.

[In addition to the information listed above, SSA also provided a technical comment which has been addressed, where appropriate, in this report.]


Overview of the Office of the Inspector General

The Office of the Inspector General (OIG) comprises our Office of Investigations (OI), Office of Audit (OA), Office of the Chief Counsel to the Inspector General (OCCIG), and Office of Executive Operations (OEO). To ensure compliance with policies and procedures, internal controls, and professional standards, we also have a comprehensive Professional Responsibility and Quality Assurance program.

Office of Audit

OA conducts and/or supervises financial and performance audits of the Social Security Administration's (SSA) programs and operations and makes recommendations to ensure program objectives are achieved effectively and efficiently. Financial audits assess whether SSA's financial statements fairly present SSA's financial position, results of operations, and cash flow. Performance audits review the economy, efficiency, and effectiveness of SSA's programs and operations. OA also conducts short-term management and program evaluations and projects on issues of concern to SSA, Congress, and the general public.

Office of Investigations

OI conducts and coordinates investigative activity related to fraud, waste, abuse, and mismanagement in SSA programs and operations. This includes wrongdoing by applicants, beneficiaries, contractors, third parties, or SSA employees performing their official duties. This office serves as OIG liaison to the Department of Justice on all matters relating to the investigations of SSA programs and personnel. OI also conducts joint investigations with other Federal, State, and local law enforcement agencies.

Office of the Chief Counsel to the Inspector General

OCCIG provides independent legal advice and counsel to the IG on various matters, including statutes, regulations, legislation, and policy directives. OCCIG also advises the IG on investigative procedures and techniques, as well as on legal implications and conclusions to be drawn from audit and investigative material. Finally, OCCIG administers the Civil Monetary Penalty program.

Office of Executive Operations

OEO supports OIG by providing information resource management and systems security. OEO also coordinates OIG's budget, procurement, telecommunications, facilities, and human resources. In addition, OEO is the focal point for OIG's strategic planning function and the development and implementation of performance measures required by the Government Performance and Results Act of 1993.