A Philip Randolph Square is a park located in Central Harlem South. According to long-time residents of the neighborhood, the park has not always been considered the safest place. While that has changed since its renaming in 1964, the reputation it once had still lingers. The A Philip Randolph Neighborhood Alliance and the Friends of A Philip Randolph Square are working with community members and stakeholders to develop plans to revitalize the park into a community space honoring the legacy of this labor unionist and civil rights activist.
This collaborative process is important for identifying community priorities and desires to ensure this space optimally serves its community. Making sense of the collected survey results is crucial to informing future designs, and the Neighborhood Alliance for Open Spaces (NAOS) has already done an initial analysis to start the process. They shared their analysis and preliminary programming diagrams with Professor Sula. However, after reviewing the materials in class, we were left with many questions about the various visualizations. In taking on this dataset for lab 2, my goal was not only to improve clarity but also to explore ways to compare preferences with one another more effectively.
I’ve decided to expand on this study for the final project by running a demographic analysis of the survey results to see what patterns, if any, emerge in the revitalization preferences. In re-examining this data through different demographic “cross-sections”, the intent is to tell the story of the people who make up the community, and their opinions on the revitalization of the park, back to the community members themselves.
I plan on using Tableau to run a preliminary analysis of the demographic data from the survey responses to develop 3-4 personas that help tell the story of programming preferences. From there, I can create groups within Tableau based on the general characteristics of the personas and filter all of the programming results by those groupings. My final deliverable will be the 3-4 personas, which will filter each of the three dashboards I created in lab 2 (physical improvement interests, programming interests, and site element preferences).
1. UX Research to Define Scope: Interview – Gregory Baggett
Professor Sula and I coordinated a meeting with Gregory Baggett of the Neighborhood Alliance for Open Spaces to review the work I completed for lab 2 and to get a better understanding of their goals in order to develop a scope for this phase of the project. Gregory expressed the goal of creating a community visioning project as a way to give this data back to the community. He expressed interest in producing a report, not necessarily in print, but in a more accessible format like a webpage. He was also interested in a static version of the visualizations that could be displayed along the fences in the park as a way to show the community the results.
We believed developing several personas based on the demographic data collected from the survey results would be the most effective way to produce the demographic “cross-sections” that Gregory was looking for. Personas, while fictional, are created as a theorized vision of the target user to help understand “users’ needs, experiences, behaviors and goals”. This user-centered design method is usually employed at the start of a project to guide development, but here we are taking a data-driven approach to personas by basing them on the data collected from actual community members. After developing 3-4 fictional personas based on actual demographic information of community members, we planned to use the characteristics of each persona to filter the visualizations I created in lab 2. We pitched this idea to Gregory, and he was interested in it not only as a way to help tell the story of who has which preferences, but also as a more empathetic and dynamic way to give that data back to the community.
2. Developing the Personas [Gephi]
We took two separate approaches to identifying clusters within the dataset, one in Gephi and one in Tableau. We decided to use the network analysis that Professor Sula produced because it was unclear to me how, and on what basis, Tableau was determining its clusters through the “Analytics” pane. Figure 2 shows the results of the network analysis of the characteristics of each person who was surveyed.
Each circle represents a characteristic of an individual (e.g. sex identification, income range, education level, frequency with which they visit the park, etc.), and the circles and labels are scaled to the number of occurrences of that characteristic. Within this network, several observations are clear:
- This network is quite dense, which was expected, but for analytical purposes this makes it more challenging to develop completely distinct personas, since there is significant overlap between characteristics.
- Five clusters emerged, coded by color, which form the basis for the personas we developed. They are loosely described as follows:
- (1) Black/African-American Male and (1) Black/African-American Female, long-time residents who visit the park daily, have at least a high school degree, and live in city-owned/public housing
- Hispanic/Latinx Female who works or owns a business in the neighborhood and visits the park several times a week
- White/European Male who moved to a rental property in the neighborhood in the last decade, visits the park several times a month, with at least a college degree, and has children
- Asian woman who works in the neighborhood and visits the park several times a week for recreation
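The clustering itself was done visually in Gephi on Professor Sula’s network, but the underlying idea can be sketched in code. The example below is a minimal, hypothetical illustration (the five survey rows and trait labels are invented, not the NAOS data): characteristics that co-occur in the same response are linked, and the strongest ties are the ones a modularity algorithm like Gephi’s would tend to keep inside one cluster.

```python
from itertools import combinations
from collections import Counter

# Hypothetical survey rows: each set holds one respondent's characteristics.
respondents = [
    {"Black/African-American", "Female", "Long-time resident", "Visits daily"},
    {"Black/African-American", "Male", "Long-time resident", "Visits daily"},
    {"Hispanic/Latinx", "Female", "Works in neighborhood", "Visits weekly"},
    {"White/European", "Male", "Renter", "Has children", "Visits monthly"},
    {"Asian", "Female", "Works in neighborhood", "Visits weekly"},
]

# Count how often each pair of characteristics appears in the same response.
cooccur = Counter()
for traits in respondents:
    for pair in combinations(sorted(traits), 2):
        cooccur[pair] += 1

# Pairs that co-occur repeatedly are the ties that pull characteristics
# into the same cluster in a modularity-based layout like Gephi's.
strong = [pair for pair, weight in cooccur.items() if weight >= 2]
```

In the real analysis the network had far more nodes and edge weights, which is why the dense overlap noted above made fully distinct personas hard to carve out.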
3. Defining the Personas [Tableau]
While these clusters provide a fair amount of detail, I was able to verify and extract more specific details such as income, education level, and employment status for these personas as necessary. It was important that the personas not become so specific that they excluded too many of the people surveyed, since our goal is to maximize the number of people these personas capture and represent. I took the rough personas we gathered from Gephi and defined them further by visualizing different demographic characteristics (e.g. sex identification, income range, education level, frequency of park visits, etc.) within each persona to see which values occurred most often.
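The “what occurred most often” step amounts to taking the modal value of each demographic field within a persona group. A minimal sketch, with invented records and field names (the work itself was done visually in Tableau):

```python
from collections import Counter

# Hypothetical responses already sorted into one persona group.
maya_records = [
    {"income": "<$25k", "education": "High school", "employment": "Retired"},
    {"income": "<$25k", "education": "College", "employment": "Retired"},
    {"income": "$25-50k", "education": "High school", "employment": "Employed"},
]

def define_persona(records):
    """Return the most common value of each demographic field."""
    fields = records[0].keys()
    return {f: Counter(r[f] for r in records).most_common(1)[0][0] for f in fields}

persona = define_persona(maya_records)
# persona now holds the modal income, education, and employment values.
```

Keeping only the modal values, rather than every combination, is what keeps each persona broad enough to represent many respondents.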
4. Assigning Groups in Google Sheets
The dataset used for this lab was the raw survey results provided to us by Professor Sula in his collaboration with the NAOS (figure 9). The data was compiled in a Google Sheet and linked directly to Tableau Public to create the visualizations. I added a “Group” column and manually assigned each survey response to the persona group it fit best. Those that did not fit any of the personas were assigned to a sixth group.
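The sorting rule I applied by hand can be expressed as a simple match-and-score function. The sketch below is hypothetical (persona traits, field names, and the match threshold are all invented for illustration): each response goes to the persona whose characteristics it matches most, and weak matches fall into the catch-all group, mirroring the manual “Group” column.

```python
# Hypothetical persona profiles; the real ones came from the Gephi clusters.
personas = {
    "Maya": {"race": "Hispanic/Latinx", "sex": "Female", "visits": "Several times a week"},
    "John": {"race": "White/European", "sex": "Male", "visits": "Several times a month"},
}

def assign_group(response, personas, threshold=2):
    """Assign a response to its best-matching persona, or 'Other' if no
    persona matches at least `threshold` characteristics."""
    scores = {
        name: sum(response.get(field) == value for field, value in traits.items())
        for name, traits in personas.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "Other"

response = {"race": "White/European", "sex": "Male", "visits": "Daily"}
group = assign_group(response, personas)  # matches "John" on race and sex
```

An explicit rule like this would also make the sorting reproducible rather than discretionary, since anyone running it would assign the same groups.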
5. Filtering by Group in Tableau
I created a new worksheet to set up these new groups, each linked to a photo/icon representing its persona. Once this worksheet is brought into the “preferences” dashboard, clicking the photo that corresponds to a persona filters the preference results. Below each persona is a brief description of the person and several demographic characteristics they fit, based on my findings from “Defining the Personas”. I also bolded general implications under “Behavioral Characteristics” to highlight how each persona’s attitudes differ across the dashboards.
6. Treemap of Personas
I also thought it would be helpful to put together a treemap of the personas that remains constant throughout the “preferences” dashboards to show what percentage of the people surveyed each persona captures. Each persona is assigned a distinct color and set to calculate and label “percent of total”, but also displays the raw count on hover. However, it is important to note that these personas do not represent the entire surveyed population: 36.73% of people surveyed do not fit into these personas, and some did not provide enough information to be sorted into them.
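The percent-of-total figure behind the treemap labels is a straightforward table calculation; Tableau computes it automatically, but the arithmetic can be sketched with hypothetical counts (the persona names and numbers below are invented, not the survey totals):

```python
# Hypothetical respondent counts per persona group.
counts = {"Persona A": 31, "Persona B": 12, "Persona C": 20, "Other": 36}

total = sum(counts.values())

# Percent of total, rounded as it would appear on a treemap label;
# the raw count stays available for the hover tooltip.
percents = {name: round(100 * n / total, 2) for name, n in counts.items()}
```

Keeping both numbers visible (percentage on the label, raw count on hover) lets viewers judge how much weight each persona actually carries.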
7. UX Research Part 2: Interviews for Feedback
I conducted two additional UX research interviews to get feedback on the clarity and communication of information in my visualizations. One participant was Gareth Mahon, partner at RKLA Studio, a landscape architecture firm in New York City. In addition to providing valuable insights into my research questions, he offered suggestions from the perspective of someone who designs public and private built environments. My other participant was Taylor McGinnis, a Democratizing Data Researcher at the Pratt Center for Community Development. Taylor put Gregory in contact with Professor Sula. She offered insightful feedback from the perspective of collecting and distributing data for use in planning.
I started by giving them a brief presentation on the background of the project and my process. I asked each participant a series of questions:
– What is your overall impression of the visualizations?
– Do you feel the information is clear? Why or why not?
– Are there any parts of the visualization that are not intuitive?
– What conclusions are you able to draw from these visualizations?
– Do you see any room for improvement?
– Do you have any additional feedback or comments?
The intention was to have a conversation for critique rather than a task-oriented usability study. I also took feedback from our class presentations into consideration.
Setting Axis & Selecting Visualization [Preferences Visualizations]
After testing several visualizations for the physical improvement and programming interests, I settled on a highlighted table displaying raw counts, which clearly maintained data integrity and was further emphasized by the sequential color scale (figure 4). The color ramp seemed an intuitive way to show at a glance where counts for low or high interest concentrated, because it follows the general coding convention that more, or more frequent, is darker. This lets the user get an idea of the data quickly, and after studying the visualization a little longer the user will notice the raw counts within the color table. For the site element preferences, I selected a bar graph.
Adjusting Appearance – Labels & Color Palette [Preferences Visualizations]
Colors and fonts were selected based on the style guide provided by NAOS. For the series of bar charts, I set the bars horizontal, since the category labels were lengthy even after condensing the language, and sorted them in descending order by raw count. I also chose to label the raw count directly and therefore turned off the raw-count scale along the x-axis, since it was superfluous non-data ink. The bars let us visualize the preferences in relation to one another within a specific category, and the raw counts add a clear reference metric to quantify the bars.
Compiling the Preferences Tables & Bar Charts
The individual visualizations for each category were then combined into their own dashboards. This allows viewers to compare, for example, physical improvement interests across every option on one page. I set the visualizations for physical improvement interests and programming interests to “fit to width” on the page to make the color blocks longer, which allows more breathing room between the raw count labels in each box compared to figure 11. For the site element preferences, I aligned the starting point of all the bars so that the labels are readable on one line, which also allows a cleaner visual comparison across the whole dashboard. On all three dashboards, I added an overall title with a brief description of what the data shows and what the ranking scale is.
Adjustments to the Overall Dashboard Based on UX Research
Reformatting Persona Names and Descriptors: Initially I gave these personas last names and put their descriptors in quotation marks, which made them feel a little too real. To make them read more like the idea of a person, I dropped the last names, put the first names in quotation marks, and removed the quotation marks from the descriptors, since the descriptors do a better job of broadly describing the group each persona represents.
Adding a Caption to the Personas: Some great feedback I received was that it wasn’t necessarily intuitive that users could click on the personas to filter the preferences on the left side of the dashboard. I added a caption next to the title that reads “click on the image to filter interest results by persona”.
Adding a Caption to the Treemap: Similarly, even though users felt it was important to have the treemap, it wasn’t intuitive what it meant without an explanation. Based on this, I added a caption next to the title that reads “shows percentage of survey respondents that is represented by a given persona”.
Adding a Legend to the Preferences Tables: When I first produced these tables in lab 2, it felt redundant to include a color scale legend because it seemed intuitive that darker colors mean more counts for that surveyed interest. However, neither user in my UX research initially understood the color coding until it was explained. Reflecting on this, I can see that the additional elements make the visualization as a whole more complex, and adding a color scale legend to indicate what the colors mean is important for the visualization to stand on its own, especially for an audience that isn’t used to reading data visualizations.
Figure 6 shows a still image of one of the visualizations generated, which can be viewed in Tableau from the button below. The following are some interpretations of the visualizations:
Overall Preference Findings
Physical Improvement Interests
Based on the saturation of the color, we can clearly see that more people had a high interest in chess & card tables, a concession stand, flower gardens & trees, playground equipment, and a tourist information kiosk. More people had little interest in a dog run. Interestingly, almost the same number of people had little and high interest in urban farming, which would be worth investigating further within the community.
Programming Interests
Based on the saturation of the color, we can clearly see that more people had a high interest in all proposed programming options. The only potential conflict is the interest in a flea market, since more people rated their interest at 1 than at 3 or 4. We can also see that more people rated their interest at 3 or higher across all options.
Site Element Preferences
Based on the length of the bars, we can see that most people preferred the following: less paving and more greenery for surfaces, more benches, no preference in fence or fence style, no gates, and no preference in adding or removing parking spaces. We can also see that people had stronger feelings about surface and bench preferences than about fencing, gates, and parking, where the bars are much closer in length.
Preferences Based on Personas
Physical Improvement Interests
Flower Gardens & Trees, Playground Equipment, and Chess & Card Tables have a clear majority preference, and Dog Run is clearly disfavored. Urban Farming, Concession Stand, and Tourist Information Kiosk all have conflicting preferences. I thought long-time residents would be less interested in a Tourist Information Kiosk because of their familiarity with the area, which turned out to be true except for the “Maya” persona, which accounts for the largest percentage of the surveyed results. Because of her interest in a Tourist Information Kiosk, the overall preference skews greatly in her favor.
Programming Interests
As mentioned above, there is a majority preference for all programming options, and all of the personas’ preferences aligned with this, except “John”’s. He was roughly evenly split between high and low interest in Flea Markets and Public Meetings/Rallies.
Site Element Preferences
Again, all of the personas aligned with the overall findings for the site element preferences, and in fact ranked them in the same order as well.
Findings [UX Research]
Consider the User: Conducting UX research to get feedback on visualizations was an important reminder that what may be intuitive to one user may not be intuitive to another. Especially as someone who has been working so closely on developing these visualizations, it was great to get other eyes on them to point those things out. Both of my users have experience looking at data in different fields, but don’t have experience producing data visualizations. This made me reconsider how information was displayed, which descriptions needed to be explicit, and how to indicate and prompt which features are interactive. Since these visualizations are meant to be shown to community members who probably have even less experience reading data visualizations, it made me more conscious of how information is displayed.
More Information vs. Too Redundant: To that point, since these visualizations are meant to be shown to community members without someone there to explain how they work or the background behind them, it’s also important that they can stand on their own. Adding those extra descriptors and legends was an important addition to my initial visualizations that I think strengthens them overall.
As a tool, filtering by persona is enormously helpful for zooming in on a user group and then zooming back out to the overall picture. Since personas are fictional people, it was a good exercise to fabricate a life for each persona based on actual demographic data from the survey. This is helpful when trying to imagine why certain personas tended to vote a certain way on the survey. However, it’s important to keep in mind that while the conclusions are based on a data-driven approach to personas, the nature of clustering data to locate patterns means that some nuance is lost. More information should be collected to find out specific details. Additionally, since I manually sorted each survey response into the group it fit best, the sorting was up to my discretion and therefore subject to my own biases. If this study is replicated in the future, I would recommend having several people participate in the sorting process for a hopefully more objective result.
Recommendations for Improvements Based on Findings
Based on my findings, and in the interest of keeping information clear, I think it would be worth figuring out a way to keep the highlighted table cells on a constant 1-to-5 scale. When filtering by persona, certain table cells disappear because there are no counts within them. While this animation does give good feedback that the tables are being filtered by the personas, it leaves room for misinterpretation if users are not careful. Some data integrity is lost because users are forced to reference the number scale above the cells to see what the response was; when the tables are unfiltered, users can see all five cells and determine the rating of each cell from its position relative to the others. Making that consistent might be challenging, since much of this behavior is built into Tableau and would probably require some significant overriding.
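Outside of Tableau, the underlying fix is a simple reindexing: pad the filtered counts onto the full 1-to-5 rating scale so that zero-count ratings still occupy a cell. A minimal sketch with hypothetical counts (in Tableau itself, padding the data source with zero-count rows would achieve something similar, though I have not verified that approach):

```python
# Hypothetical counts for one option after filtering by a persona:
# ratings 2 and 5 received no votes, so their cells would vanish.
filtered = {1: 4, 3: 7, 4: 2}

# Reindex onto the full 1-5 scale with explicit zeros, so every rating
# always renders a cell and position alone conveys the rating.
full_scale = {rating: filtered.get(rating, 0) for rating in range(1, 6)}
```

With explicit zeros, the table keeps all five cells under every filter, and users can read each cell’s rating from its position rather than the axis labels.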