In Part 1 I talked about my experience using quantitative research methods at the MBTA. Over the past almost a year I have thought (and read) more about qualitative research and data (and the difference between the two).
For transit agencies to really consider and use community voices and lived experience as data they will need to institutionalize qualitative data and research methods. This will require different data infrastructure, data collection, and analysis skills.
In general, transit agencies gather qualitative data for a particular project, plan, or policy decision as one-off efforts. Each effort sometimes regathers the same input from the community as previous ones, which is a burden on both the community and the agency's time. Part of the problem is that transit agencies (and transportation agencies more broadly) don't have a qualitative data infrastructure.
As agencies started getting more automated data from technology systems they developed data infrastructure to clean and store data. They hired IT staff to build and maintain data warehouses and data scientists to analyze the data. They built dashboards that visualize the data. Part of what this data infrastructure allows is for multiple teams to use the same data to answer different questions. For example, ridership data is stored in one place and many departments (and agencies and organizations) can access it for their analyses. The data infrastructure also provides transparency with open data.
In my experience, transportation agencies don't have the same infrastructure for qualitative data. There isn't a centralized storage system where multiple teams can find out what riders on a certain route or in a certain neighborhood are saying. Data from the customer call center remains siloed in that system. Input for a specific project stays with that project team. Data from the transit agency isn't shared with the MPO or the city, or with the public, in any standardized way.
At the MBTA we did create an infrastructure for survey data. My team wrote an internal survey policy to standardize practices and data sharing. Part of this effort was standardizing basic questions so we could gather comparable results across surveys and time. This ‘question bank’ also allowed us to save time and money by getting all of our standard questions translated into the six languages the MBTA uses (based on its Language Access Plan) once. We attempted to create a single repository for survey data. (Most of the work to do this was organizational, not technical.)
Creating this type of qualitative data infrastructure requires thinking through data formats, how to code qualitative data (by location, type of input, topics, etc), and how to share and tell the stories of the data. And whoever is doing that has to have authority to impact all of the ways the agency gets community input. So importantly creating this infrastructure is also about where community engagement lives in an agency and how it is funded.
(An advocate I spoke to recently suggested that maybe this consolidation of qualitative data shouldn’t even live at a transit agency. That it should cut across all transportation modes and live at the MPO or other regional body.)
To move beyond using qualitative data in quantitative analysis, agencies need people with research and data collection skills not always found in a transit agency. Lots of agencies have started hiring data scientists to analyze their ‘big’ quantitative datasets. Agencies also need sociologists, ethnographers and other research skillsets grounded in community. These researchers can design community data collection efforts that go beyond public comments in a public meeting.
Qualitative research also asks different questions. Instead of using data to explain who is in the tail of a quantitative distribution, qualitative research asks questions like: why is the distribution like this in the first place, and how do we change it? Qualitative research is able to bring in the historical context of structural racism, explore the impact of intersectional identities, and allow power dynamics to be part of the analysis. Check out more from The Untokening on the types of questions that need to be asked.
Transit agencies often aren’t asking the types of research questions that qualitative research answers. Not just because they don’t have the staff skillsets, but because these questions don’t silo people’s experiences with transit from their experiences and identities overall. Lots of agencies try to stay in their silos, so they aren’t forced to address the larger structural inequities. It is easier to focus on decisions you think are in your control. For example, quantitative Title VI equity analyses are confined to only decisions made by that agency in that moment in time. But equity is cumulative and no one is just a transit rider.
Valuing community voices as essential data means agencies will need to invest in data collection, storage, analysis, and visualization or story-telling for qualitative data in a manner similar to quantitative data. The executive dashboards and open data websites will need to incorporate both types of analysis and data. More fundamentally, it means that agencies will have to break down their silos and take an active role in fixing the larger structural inequities that impact the lives of their riders every day.
This series is about the data used to make decisions; clearly who is making the decisions is also important to valuing community voices. I chose to write this series in a way that shows how my thinking about data has evolved over time and will no doubt continue to do so. In fact it evolved in the act of writing this! For insightful dialogue about revision in writing and life, check out this podcast between Kiese Laymon and Tressie McMillan Cottom.
One of The Untokening Principles for Mobility Justice is to “value community voices as essential data.” I have been thinking about how transit agencies can put this into practice.
This is a three-part series that shows my thinking about data over time. The prequel is the post I wrote on data back in 2017 that mostly focused on how messy quantitative data analysis is. In Part One I discuss my experience in a transit agency mixing quantitative and qualitative data for analysis using quantitative research methods. Part Two is my thinking now on the importance of qualitative research methods and what transit agencies need to do to put qualitative data on equal footing with quantitative data. (Note: I have found a distinction between qualitative data and qualitative research methods useful as my thinking has evolved.)
Quantitative transit data often comes from technology systems (e.g. automated passenger counters or fare collection systems) or survey datasets (e.g. the US Census or passenger surveys). In both cases collecting quality data requires investment. The benefits of technology systems are datasets that contain almost all events (a population, not a sample) and the ability to automate some analysis. However, transit agencies can't rely on technology systems alone, because there is so much information, quantitative and qualitative, that these systems can't measure.
As a generalization, qualitative data is information that is hard to turn into a number. In transit analysis, it is needed to answer questions about how people experience transit, why they are traveling, which trips they didn't make, and how they make travel decisions. Qualitative data can come from surveys, public comments at meetings, customer calls, focus groups, street teams, and other ways that agencies hear from the public directly.
In the data team at the MBTA we knew we needed both quantitative and qualitative data, usually mixed together iteratively depending on the type of decision. As an oversimplification, we used data to measure performance, find problems, and to identify/evaluate solutions.
Before you measure performance, you have to decide what you value (what is worth measuring) and how you define good performance. Values can't come from technology and should come from the community. At the MBTA the guiding document is the Service Delivery Policy. In our process to revise this policy, we used community feedback in the form of deep-dive advisory group conversations, a survey, and community workshops. Once we agreed on values, and knowing what data we had to measure them, we needed input to make the thresholds match people's experiences.
For example, we valued reliability, so we wanted to measure it in order to track improvements and be transparent with riders. This raises the question: how late is late? Our bus operations team stressed that they need a time window to aim for, given the variability on the streets. From passengers we needed to understand their experiences: is early different from late, do they experience lateness differently for buses that come frequently versus infrequently, and how do they plan for variability in their trips? Then we worked with the data teams to figure out how to build measures from the automated vehicle tracking data, and we reported reliability publicly every day.
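The mechanics of such a reliability measure can be sketched in a few lines. This is only an illustrative calculation, not the MBTA's actual methodology; the time window and the shape of the input data are assumptions I am making for the example.

```python
from datetime import datetime, timedelta

def on_time_rate(departures, window=(timedelta(0), timedelta(minutes=3))):
    """Share of departures whose deviation from schedule falls in `window`.

    departures: list of (scheduled, actual) datetime pairs, as might come
    from automated vehicle tracking data.
    window: (earliest, latest) acceptable deviation. The default of
    0 to 3 minutes late is illustrative, not an official threshold.
    """
    early_limit, late_limit = window
    on_time = sum(
        1 for sched, actual in departures
        if early_limit <= (actual - sched) <= late_limit
    )
    return on_time / len(departures)

# Example: three departures, one of them 5 minutes late.
deps = [
    (datetime(2021, 5, 3, 8, 0), datetime(2021, 5, 3, 8, 1)),
    (datetime(2021, 5, 3, 8, 15), datetime(2021, 5, 3, 8, 20)),
    (datetime(2021, 5, 3, 8, 30), datetime(2021, 5, 3, 8, 30)),
]
print(on_time_rate(deps))  # 2 of 3 departures fall inside the window
```

Note how the passenger questions above map directly onto the parameters: whether early counts as late is the lower bound of the window, and frequent versus infrequent routes could simply use different windows.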
Identifying problems can come from both community input and data systems. Some problems can only be identified through hearing from passengers. No automated system measures how different riders experience safety onboard transit or tells transit agencies where people want to travel but can’t because there is no service or can’t afford it. In some cases, automated data is far more efficient in flagging issues and measuring the scope and scale of problems. For example, we used automated systems to calculate passenger crowding across the bus network and where it is located in time and space.
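A crowding calculation like the one mentioned above can be sketched from automated passenger counter (APC) data: the load after each stop is the running sum of boardings minus alightings. This is a simplified illustration; the capacity threshold and data layout are assumptions, not the MBTA's actual standard.

```python
def stop_loads(stop_events, crowd_threshold=56):
    """Passenger load after each stop, from APC boarding/alighting counts.

    stop_events: list of (stop_id, ons, offs) in stop order for one trip.
    crowd_threshold: load at which we flag crowding; 56 is an illustrative
    figure for a 40-foot bus, not an official capacity standard.
    Returns a list of (stop_id, load, crowded_flag).
    """
    load = 0
    results = []
    for stop_id, ons, offs in stop_events:
        load += ons - offs
        results.append((stop_id, load, load >= crowd_threshold))
    return results

# One illustrative trip: crowding builds mid-route, then empties out.
trip = [("A", 30, 0), ("B", 35, 5), ("C", 10, 20), ("D", 0, 50)]
for stop, load, crowded in stop_loads(trip):
    print(stop, load, "CROWDED" if crowded else "")
```

Run across every trip in the network, this is how automated data can locate crowding in time and space far more efficiently than any outreach effort could.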
The MBTA used quantitative data to identify a problem of long dwell times when people add cash to the farebox on buses. The agency decided on a solution of moving cash payment off-board at either fare vending machines (FVMs) or retail outlets. (I will admit more qualitative analysis should have been done before the decision was made.) It was critical to understand how this decision would impact the passengers who take the 8% of trips paid in cash onboard. We used quantitative data on where cash is used to target outreach at bus stops. We did focus groups at community locations. Talking to seniors, we found that safety was a key consideration between using a bus stop FVM or a retail location. This is the type of information we could never have gotten from data systems or surveys that didn't ask the right questions. The team used the feedback to shape the quantitative process for identifying locations.
A key question is at what points in a quantitative analysis process agencies can rely on quantitative data and when qualitative data is imperative. As a generalization, quantitative research methods aggregate data and people's experiences. We aggregate to geographic units (e.g. census blockgroups) and to demographic groups. We look at the distribution of data and report out the mean or some percentile. Quantitative data analysts need to look at (and share) the disaggregated data by demographics/geography before assuming the aggregated data tells the complete story. And ask themselves: when do we need more data to understand the experience in the tail of the distribution, and when is the aggregated experience enough for making a decision?
The question of the bus being late and the use of cash onboard illustrate this difference. Once we set the definition of reliability, service planners use quantitative data to schedule buses. Looking at a distribution of time it takes a bus to run a route, you know there is going to be a long tail (e.g. long trips caused by an incident or traffic). Even though the bus will be late some percent of the time, it is an efficient use of resources to plan for a percentile of the distribution. Talking to the people who experience the late trips would be useful, but likely wouldn’t change that service is planned knowing some trips will be late. (Ideally riders, transit agencies, and cities work together to reduce the causes of late trips!)
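The percentile logic in the scheduling example can be made concrete with a toy distribution of running times. The numbers below are made up for illustration, and the choice of the 85th percentile is an assumption, not a documented scheduling standard.

```python
import statistics

# Observed running times (minutes) for one route over many trips.
# The long tail (63, 75) comes from incidents and traffic.
run_times = [42, 44, 45, 45, 46, 47, 48, 50, 52, 55, 58, 63, 75]

mean_time = statistics.mean(run_times)

# Scheduling to an upper percentile (here roughly the 85th) accepts
# that some share of trips will still run late, but avoids padding
# every scheduled trip to cover the worst case in the tail.
p85 = statistics.quantiles(run_times, n=20)[16]  # 17th of 19 cut points

print(round(mean_time, 1), p85)
```

Scheduling to the mean would leave nearly half of trips running over; scheduling to the maximum would waste vehicles and operator hours on every other trip. The percentile is the compromise the post describes, and talking to riders in the tail wouldn't change that arithmetic, though it should shape what the agency does about the causes of those late trips.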
However, on the question of cash usage, looking at the payment data you can’t ignore the 8% of trips paid in cash. The experience of that small group of riders is critical. Likely riders paying in cash rely on transit, experience insecurity in their lives, and a decision to remove cash onboard is a matter of access. Without talking to riders, we have no data on why they pay in cash, what alternative methods to add cash would work best for them, and the impact of having to pay off-board.
In my current thinking, at a minimum, decisions that impact the ability of even a small number of people to access transit or feel safe require a higher threshold of analysis. Agencies shouldn't rely solely on aggregate quantitative data and need qualitative data on the impacts. The role of transit (and government in general) is to serve everyone, including, and often especially, people whose experiences fall in the tail of a distribution. (A very quantitative-analysis view of the world, I know.)
The lived experience of the community is critical to transit agency decision-making. There are many types of data that can’t come from automated systems. In my experience transit agencies should mix qualitative data into quantitative data analysis, often iteratively as the data inform each other. In practice this means that the teams doing quantitative analysis and community engagement need to be working in tandem with the flexibility to adjust as new data changes the course of the analysis.
(Written May 2017)
Data driven decision-making is the buzz phrase in government and with all the ‘big data’ available it has great potential to improve outcomes. But I don’t think most people have a good idea about what it means in practice or how to understand its limits.
The irony is that I studied pure math in college because I wanted absolutes and to find a kind of truth. I loved proofs, because there is an indisputable answer. And now I spend my days wading around in data, which is all some amounts of wrong, and trying to figure out how to best use it to guide decisions.
Here are a few overly simplistic conclusions I have drawn from my team’s work over the past two years.
Data isn’t clean
There is a tendency to think numbers are true, but in reality they are estimates. The challenge is figuring out how close of estimates and how to make them closer. Often the data we have isn’t being generated to measure the thing we want to measure; it is a byproduct of a different process we are trying to recycle.
We constantly have what we call ‘data mysteries.’ This is when common sense makes us think the data is missing or wrong, and we have to figure out the cause of the problem. Maybe there are software problems. Or we miscoded something along the way. Or the data just isn’t being collected in the way we thought.
Context is critical
Making information out of data requires a lot of context. We can’t find the problems with the data without knowing how the data was collected and the directions of possible errors. We need to know external variables that could be causing change.
We need to understand the problem we are trying to solve so we can identify the best datasets and types of analysis to answer that question. This determines the exact variable, the levels of aggregation, the timeframe for the dataset, and any number of other factors.
Lastly, the results require context. We might say that approximately 34% of trips won’t be impacted, but this will require four footnotes to explain the exact conditions under which we think this to be accurate.
Data analysis isn’t linear
It would be nice if the process went something like: define a problem, find clean data, analyze it, propose best solution. But it isn’t a linear process. Sometimes we find data and play around with it to see if it can tell us something interesting. Other times we have a problem and we don’t have data, so we try to collect some or find something that might approximate it. Often there is already a proposed solution and we are trying to check the impacts.
Analysis is a maze (if not, you are probably presupposing the answer). We are constantly making the best guess of which pathway to take, running into dead ends, finding data problems, and when we arrive at an answer it is likely only one of many paths. We can hope that no matter what path we took that the general conclusions would be the same; but, depending on the dataset, it isn’t a guarantee.
The results aren’t static
Just like writing, there are drafts, but eventually we declare it good enough. And often there are mistakes. New data shows up which points out a problem with our original dataset. We figure out a new method. We realize we missed something or made a formula error.
We can hope we are at least asymptotically approaching ‘the answer.’ But we need room to get there. Being honest about data requires forgiveness.
Data isn’t everything
Our understanding of problems to be solved shouldn’t be limited by the data we have. And just because we have data on something doesn’t mean there is a problem to be solved with it.
For example, it is easier to estimate people’s wait times on subway platforms than at bus stops. But that doesn’t mean subway wait time reliability is more important than bus. Qualitative data (aka actually talking to people) is critical to finding problems and solutions.
We have all sorts of systems that generate millions of records of data a day. But the existence of data doesn’t eliminate the need to have a conversation about whether/how we should be using it.
Data does have the power to help governments to make better decisions. We can measure impacts of policy decisions. We can disprove conventional wisdom. The results can change (and improve) outcomes. In order to have this impact, decision-makers, members of the public, and journalists all need to be better data consumers.
This means reading the warning labels that come with a dataset to understand the context. It means appreciating all of the complexities and uncertainty in the process. It means allowing the time and space to find mistakes. But most of all, it means being open-minded and not allowing our implicit assumptions to overwhelm our curiosity about what the data can tell us.
Data is not truth; it is very messy. But acknowledging and appreciating the mess makes the analysis far more likely to be accurate in the end.
The renewed focus on transportation equity should bring a review of the existing federal Title VI regulations. (In transit circles we usually just say Title VI, but we should say Title VI of the Civil Rights Act of 1964 more often to remind ourselves of the work it took to get the law passed and what it represents.)
The recent TransitCenter report on Equity in Practice has a section on the limitations of the current FTA Title VI circular for analyzing the equity of transit agency decisions (page 41). In my mind a fundamental shortcoming is that the analysis process looks at the impact of changes, which assumes the status quo is equitable (or at least acceptable). Another problem is that the equity of fare and service decisions is analyzed at the level of the transit agency, not at the regional level. For regions with multiple transit agencies, each agency is analyzing the impact of its own decisions, but the riders experience the impacts overall.
Beyond transit, no entity is required to analyze the distribution of all transportation resources (across all modes based on who is using them) at the regional or state level. Transportation decision-making is fragmented between state DOTs, MPOs, transit agencies, and cities and towns; between modes (transit, highways, local roads); between funding sources (federal, state, and local funds); and between types of money (operating and capital). In theory each decision-maker could analyze their own decisions and say they are equitable by their measures, but the sum total of all the decisions aren’t equitably meeting people’s transportation needs.
I started to realize this problem when I was a transit advocate and graduate student in Atlanta. In 2006 the Atlanta MPO (ARC), Georgia DOT, and GRTA (the state agency running commuter buses) adopted a project selection process that weighted congestion relief at 70 percent. Advocates objected because this would shift even more transportation funds away from transit riders reliant on local transit service (MARTA and county agencies) and pedestrians/cyclists. In November 2008 on behalf of Atlanta Transit Riders’ Union, I filed a Title VI complaint against the Atlanta Regional Commission (the MPO) and GRTA.
The complaint argued that weighting congestion relief would disproportionately benefit high-income and white commuters, even in the selection of transit projects. I filed the complaint with FTA. This was before the 2012 revised FTA Title VI circular. Maybe it should have gone to FHWA too.
We applied pressure on FTA, with some help from Congressman John Lewis's office, to investigate. FTA finally asked ARC and GRTA to respond, and their response stressed that the weighting was acceptable because they looked at transit and highway projects separately with different methodologies. The official outcome (14 months after the complaint was filed) was that FTA said it would do a compliance review of both agencies.
There were positives. The act of filing and the pressure we created did give MARTA (the transit agency) and the City of Atlanta leverage to get decisions to their benefit at the ARC. When the ARC staff met with us, they did agree with some of our concerns. Eventually the congestion weighting in the project selection was changed.
I bring this up 13 years later because, while my understanding of the complexity of transportation policy has increased, the problem of how to challenge the equity of regional/state transportation decision-making remains. With the Biden Administration's emphasis on equity, hopefully we can rethink how to aggregate decisions in Title VI analysis. For example, all of the federal transportation money (across all modes) going to a state or region could be analyzed together based on who uses the services, not on the geographic proximity of minority populations to projects.
I am interested in your thoughts on how to solve the Title VI silo problem!
Transformative implementation of the infrastructure and federal budget bills will take a generation of public service to fix the machinery of government.
The theme of my work this year is The How, not the What. There is a lot of great work being done on what transportation policy changes are needed to address equity and climate change. But how to make or implement policy changes can be much harder. Harder to do, and harder to research and learn from, as changes are often obscured in political deals and implementation takes place inside complex government mazes.
This is a short video I made for a “poster” presentation at the virtual TRB Conference on Advancing Transportation Equity. I am still looking for examples and other theories of change, so please reach out if you have some to share.
I am going to start with the given that a major source of inequity in transportation is the prioritization of funding and building infrastructure for personal motor vehicles. Equity (and addressing climate change) requires a shift in this resource allocation. The power to make these decisions mostly sits outside individual transit agencies. However, the question of equity also exists within the allocation of resources for transit (and biking and walking). And transit agencies do have the power to make these decisions.
There are a number of ways to define and measure equity in public transit. One definition is essentially that people (or neighborhoods depending on your dataset) of all demographics (income, race, ethnicity, language, ability, age) have access to service that meets their transportation needs. Since ‘needs’ is hard to measure, most analysis measures sameness (equality). For example, do people living in Black neighborhoods have access to the same number of jobs within 45 minutes as white neighborhoods?
There is a lot of data showing these types of inequities across transit networks. The underlying problem is both discriminatory land use policies and transportation decisions. Transit agencies can and should use these types of metrics and data to reduce and eliminate these inequities. But these inequalities didn’t just happen. They are the result of past (and current) transit agency decisions – big and small.
In order to not repeat past inequitable decisions and to acknowledge the impacts caused by agency decisions, I think transit agencies need to do an accounting of how their system got inequitable. We need ‘active voice’ in transit agency equity plans that takes responsibility for their role in creating the problem.
Inequitable transit access can come from big capital decisions, like where to invest in rail service, and incrementally as a series of small decisions, like where to put that one additional bus trip. No doubt political pressure by politicians representing white and higher-income communities is a major factor in many decisions. But that pressure will continue in the backrooms until it is forced into the light and acknowledged as inequitable.
If you are with me so far that this is important, my question is how: how should transit agencies go about this accounting of past decisions? Here are a few components I am thinking about.
Who should do the accounting? Quite literally what process should agencies take and who should lead and be involved in the process. To build new solutions to long-term problems the answer can’t be the agency hires the usual consultants to lead a study. How can agencies and communities collaborate so the process builds trust?
What is the scope? Some transit agencies in the US trace their history back to private-sector control, and it would be overwhelming to analyze every decision. (The history of transit injustice goes back to the beginning; here is a timeline I put together for my master's project on Atlanta.) Every agency and region will need to figure out their scope, but it seems important to pick a variety of decisions and look at how they happened and their impact.
What is the format for presenting the history and acknowledgement of equity impacts? Or what is a platform for ongoing analysis and discussion? One interesting example I found is an LA Metro blog post on one of their rail lines.
How should the outcome be used? How will the results be integrated into policy decision-making? And drive narratives and communications about equity to help push back on the forces of inequity? I have seen inequitable decisions as the result of political bullying, maybe talking about the past can help inoculate against those tactics in the future.
What are the challenges for government agencies admitting past injustices? Or even disclosing that they were wrong about something? Clearly the main challenge is that if you admit a past wrong then you should do something about it, and that requires shifting power and resources. But I also found a deep fear inside a government agency of admitting any mistakes, even small ones. We need to figure out ways that a governmental body can acknowledge it did something wrong that don't undermine trust in government and instead build it.
(A side tangent: one of the reasons I started the data blog at the MBTA was to create a forum or platform for talking about data mistakes and errors. Data analysis is difficult and messy, and even if there are no mistakes, new data comes along that might change the results. But there wasn't really a way to give a matter-of-fact telling of what happened and why we think the new results are better. My hope is that talking about mistakes makes people more confident in the data analysis and the agency in general.)
I have a lot more questions than answers on this topic. And I don’t think I should be the one to have the answers and I know this idea isn’t new. So I am looking for examples or best practices of transit (or other government) agencies doing this type of accounting of past inequitable decisions. Please share if you have any and I will share what I learn!
Ideally transit agencies should not have to be in the position of directly alleviating income inequality. They should play a supporting role by getting people to opportunities; federal and state policies like taxes, minimum wage and benefits laws, and guaranteed income programs should be addressing the outrageous inequality and poverty in the US.
If poverty was addressed in other areas of public policy, then transportation policy could focus on using pricing to shift behavior to address congestion and emissions. The US should have high quality transit at a low to no cost to users to make it competitive, and be partially funded by making the cost of driving reflect its true social cost. But unfortunately we are far from this reality, so transit agencies are often in the position of trying to solve multiple policy problems with limited tools.
Means-testing is at the center of progressive public policy; we just usually think about it as income (individual and corporate) taxes. But transit agencies don't have the power to levy income taxes. So means-tested fares are one of the few tools they have to raise revenue in a progressive manner (value capture is another idea often discussed).
Multi-modal agencies like the MBTA use mode as a proxy for income in their fare policy with lower bus fares and higher commuter rail fares. But this can be a self-fulfilling cycle reinforcing the existing usage of bus by low-income riders and exclusion of low-income people on commuter/regional rail. It has become a greater concern with the suburbanization of poverty. (Addressing housing affordability is critical, but again one that many transit agencies have limited control over.)
Free transit for all and means-tested fares are popular policy ideas for addressing transit affordability, but let's continue to remember that the root cause of the problem is policies that create vast income inequality. How equitable either idea is depends in part on where the lost fare revenue will come from. For example, replacing all fare revenue with sales tax revenue is likely more regressive than a means-tested fare system in which higher-income riders pay fares. All calls for free or means-tested fares should be associated with a funding source that ensures low-income people aren't just paying in a different form (and additional funding to increase service, since people's time is also a cost!).
The mix of funding sources for every major transit agency in the country is different. In 2019, the MBTA reported a 44.6% farebox recovery ratio to the National Transit Database, while Los Angeles Metro reported 14.6%. The differences reflect local funding sources, rider demographics, and the types of services offered: LA Metro doesn't run the regional rail, while the MBTA does.
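For readers outside transit, the farebox recovery ratio is simple arithmetic: fare revenue divided by operating expenses. The dollar figures below are made up to illustrate the calculation; only the resulting 44.6% matches the NTD figure cited above.

```python
def farebox_recovery(fare_revenue, operating_expenses):
    """Farebox recovery ratio: the share of operating costs paid by fares."""
    return fare_revenue / operating_expenses

# Illustrative numbers only (not actual NTD figures): an agency
# collecting $446M in fares against $1,000M in operating expenses.
print(f"{farebox_recovery(446e6, 1000e6):.1%}")  # 44.6%
```

The ratio is a budget fact, not a service-quality or equity measure, which is the point made below: it means nothing to a rider trying to make ends meet.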
This means that a one-size fits all policy at the transit agency level isn’t possible. From a funding perspective what works in LA, won’t necessarily work or be equitable in Boston. However, the affordability arguments for free fares or means-testing are universal. The farebox recovery ratio is meaningless to a rider trying to make ends meet every day in LA or Boston or anywhere in-between. And whether a person has to ride a bus or a train to get from A to B (and who operates it) also doesn’t matter.
To me this is a clear indication of the need for federal policies to address both income inequality and public transit funding. Transit agencies only survived COVID because the federal government passed three rounds of emergency funding (totaling ~$70 billion). But this was the first major federal funding for transit operations (not capital) since it was cut by President Reagan in the 1980s. A return to regular federal transit operating assistance, funded by a progressive (and true cost of driving) source, could allow agencies to increase service and either lower fares or implement means-tested fares.
A weedy postscript on fares and federal tax policy
Deep in the weeds of the MBTA’s fare revenue there was a golden egg…
There is another way the federal government subsidizes transit: the pre-tax deduction for transit fares (and parking at transit lots). While this benefit primarily goes to individuals in higher-paying jobs whose employers participate in these programs, transit agencies benefit as well. I only know the details at the MBTA, and it is worth exploring the COVID impacts and thinking about what changes are needed.
Before COVID the MBTA's corporate pass program was the golden egg of fare revenue. People could only sign up for monthly passes on a recurring basis, and the cost was subsidized by the pre-tax payment and often by employers. This allowed the MBTA to set higher commuter rail pass prices. It also meant that high-income riders often bought passes for which they didn't take the number of trips required to break even at the sticker price. The MBTA got revenue from passes without having to provide all of the capacity they could have represented.
This was equity-enhancing only because the MBTA has a weekly bus/subway pass priced at roughly one quarter of the monthly pass, which maintains pass access for low-income riders not in the corporate program, and because the agency could use the corporate pass revenue to fund service for the bus/subway pass users riding more than the break-even point. So commuters (really the federal government and employers) were subsidizing everyday riders.
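The break-even arithmetic behind this is straightforward: a pass pays for itself once a rider's trips, priced individually, exceed the pass price. A quick sketch with hypothetical prices (not actual MBTA fares):

```python
# Break-even analysis for a monthly pass: how many single-fare trips a
# rider must take before the pass beats paying per trip. Prices below
# are hypothetical illustrations, not actual MBTA fares.

def breakeven_trips(pass_price: float, single_fare: float) -> float:
    """Trips at which the pass costs the same as paying per trip."""
    return pass_price / single_fare

monthly_pass = 90.00
single_fare = 2.40
print(breakeven_trips(monthly_pass, single_fare))  # 37.5 trips

# A corporate-pass rider taking only 25 trips still pays the full pass
# price, so the agency keeps the difference as "free" revenue:
trips = 25
net_to_agency = monthly_pass - trips * single_fare
print(net_to_agency)  # 30.0 — revenue without the corresponding capacity
```

Riders below the break-even point generate net revenue; riders above it (often the weekly-pass users) consume service the corporate revenue helps fund.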
The COVID pandemic likely killed this golden goose for the MBTA. First, many people turned off their transit payroll deductions during the pandemic and will have to be convinced to sign up again. Second, it is likely that some continued remote work will make the monthly pass less attractive, even with the pre-tax benefits. It is also possible that large employers will reduce their subsidies for transit (please don't).
This means even as ridership returns fare revenue could lag behind, thus creating a structural problem if there isn’t a new source of sustainable operating funds. This new source of funds should also be equity enhancing, not higher fares on the remaining riders.
Clearly the MBTA, and other transit agencies previously reliant on pass revenue, have to rethink their fare structures over the next few years, including offering different products in their corporate programs (which will require technology upgrades). It will also be important to make federal and employer transit benefits more available to lower-wage workers in industries where remote work isn't an option.
I would be curious to hear from folks with knowledge of other agencies’ fare mix if there are similar or different concerns about how fare revenue will return. Do people have suggestions about how to make sure the federal transit pre-tax policy remains a useful tool for transit agencies?
The pandemic might have changed some things, but I think mostly it revealed or exacerbated existing conditions. So far it has not fundamentally changed my view of the future of transportation. Three key realities remain true. One, we have to reduce emissions from transportation to address climate change and air quality. Two, we have a limited amount of public space for mobility and increasing demand for it. (The pandemic intensified the demand with more deliveries and more public outdoor space for dining, recreation, and non-motorized transport.) Three, our transportation system is unequal, unsafe, and inefficient in both its funding and in how public space is allocated and enforced. (This past year further illuminated the inequity and violence around enforcement in public space and expanded my definition of safety.)
Maybe because I was a math major in college, when faced with multiple problems I like to find the intersection of their solution sets. In this case, the use, space allocation, and funding of systems of shared transport clearly lie in the intersection of all three problems. While the space and emissions benefits of shared transport are fairly clear, shared transport is also important as a place for social integration. I believe it is critical for a multiracial democracy to have places where people safely share space with people from different backgrounds.
Over the past decade my thinking about shared transport has expanded, in part because I spent several years living and traveling in the Global South and saw a variety of shared transport systems that have been around for a very long time, and in part because new technology (e.g., electric scooters) and the ability to book fares on smartphones have created new shared mobility opportunities (and a new arena for competition).
As I left my research role in Santiago, Chile, I wrote a paper about shifting regulatory frameworks for transportation (presented at the Transportation Research Board in 2017). My premise is that transport can be framed along two axes: how collective/shared the vehicles are, and the role of the state in providing the service (its publicness). This graphic could be updated, but the idea is still useful.
As the graphic shows, shared transport ranges from bicycle sharing to trains that can carry thousands. We need many types of shared mobility to match different land uses, demand levels, and personal preferences. There is no one-size-fits-all option, regardless of who is operating the service. (I want to start thinking about how urban freight/deliveries fit in.)
Given the intersection of problems, we need to shift trips from private motorized vehicles to shared vehicles (and non-motorized modes). The important policy questions often concern the role of the state in regulating, funding, and operating each service to achieve this goal and provide equitable access. The graphic illustrates that publicness increases as sharing capacity increases. This is due to the need for large capital investment that lends itself to a public monopoly, but public ownership exists across the sharing spectrum.
I don’t know exactly what the mix of public and privately operated shared transport services will be in the future (or how Autonomous Vehicles will manifest), but regardless of that future it will be essential to have a digital platform that provides users with information about the costs, in both time and money, of any given trip and lets them book fares. Many tech companies have figured this out and are trying to be the platform. But it is critical that the platform be owned by the public sector.
Public control is necessary to ensure fair competition, facilitate equitable access, and achieve public policy goals. The digital platform is essentially the marketplace for shared transportation and, especially if there are private operators, the site of competition by giving consumers (comparable) information. The public sector can set the rules for access to the platform, like ADA accessible vehicles or providing service in low-income communities.
A digital information and ticketing platform also provides the mechanism for government subsidy for transportation, either for equity goals or incentives to shift behavior to shared trips. Subsidy could be applied at the trip level, for types of services, or for individuals. Even if some public transit service is free, a platform allows public subsidy for low-income people to make trips where and when high capacity public transit service doesn’t make sense. For example, free transfers to bike sharing controlled by a different entity or a subsidized taxi trip late at night.
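The mechanics of trip-level subsidy can be sketched in a few lines. Everything here is a hypothetical illustration of the idea, not any agency's actual policy: the rules, discount amounts, and mode names are all invented for the example.

```python
# Sketch of trip-level subsidy rules a public mobility platform could
# apply at booking time. All rules and amounts are hypothetical
# illustrations, not an actual fare policy.

def subsidized_fare(base_fare: float, rider_low_income: bool,
                    mode: str, hour: int) -> float:
    """Apply illustrative subsidy rules and return the fare charged."""
    fare = base_fare
    if rider_low_income:
        fare *= 0.5                  # income-based discount on any trip
    if mode == "bikeshare":
        fare = max(fare - 1.00, 0.0)  # transfer credit toward bike share
    if mode == "taxi" and (hour >= 23 or hour < 5):
        fare = max(fare - 5.00, 0.0)  # late-night taxi subsidy
    return round(fare, 2)

# A low-income rider's 1 a.m. taxi trip: $12.00 base fare,
# halved to $6.00, then the $5.00 late-night subsidy applies.
print(subsidized_fare(12.00, rider_low_income=True, mode="taxi", hour=1))
```

Because the platform sees each trip, the same mechanism can target subsidies by person (income-based), by mode (bike share transfers), or by time (late-night service), without making any single mode fare-free across the board.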
Another key reason for public ownership of the platform is to ensure access for cash users as the trend toward smartphone and contactless payments continues. Cash access is needed by under-banked people and for privacy reasons. The platform has to be paired with an easy way for people to add cash to accounts that can be used to pay for all forms of transportation.
The MBTA Fare Transformation project is designed to be the foundation of a public platform. After integrating all of the MBTA services together, the plan is to bring in other services and develop joint fare products. The retail and fare vending machine network will provide access to cash users not only to the MBTA, but potentially to other shared mobility options. If all goes according to plan, it is a good example of a public agency acting proactively to protect the public good in the future.