The Princeton Review, College Advisor
The Princeton Review's College Advisor helps you build your ideal college list and find your perfect college fit. Answer a series of questions about academics, geographic preferences, tuition needs, Greek life and more, and College Advisor identifies what type of schools could be in your future. You receive personalized matches for dream, match and safety schools. And you get unfiltered campus reviews from current students.
Responsibilities
Product Strategy, User Research, Creative Direction, User Experience Design, Visual Design
Role
Principal Product Designer
The Challenge
Our target audience is high school students in grades 9 through 12, who access and consume the majority of their digital content on their smartphones. In a landscape that is increasingly mobile-first, how does The Princeton Review tap into those usage patterns to deliver our products and services?
Hypothesis
With the problem clearly defined and grounded in research, we created a working, testable hypothesis: We believe we can increase engagement in our product ecosystem (SAT/ACT test prep, online tutoring, and college admissions) if college-bound high school students can successfully understand their undergraduate options with a mobile advisor application.
The Research
We examined our hypothesis by breaking it into four distinct parts: the desired business outcomes, the users of our service, the outcomes motivating them, and the product we believed would work in this situation.
Business Outcomes
At the highest level, we wanted to increase the number of users entering the acquisition funnel for our products. To grow that funnel, we explored a mobile app that would guide users toward a strong college fit, leveraging the data on colleges and universities, built on feedback from over 143,000 students, that we use to publish our college rankings and annual book, The Best Colleges.
The Users
We created a set of user personas based on who we believed our target audience to be. Drawing on the company's historical student data, we started with general assumptions about who these people were. Once those were established, we went into the field, interviewing students, both current enrollees in our classroom test prep services and college-bound high school students outside our ecosystem, and refined our personas as the research progressed.
The User Outcomes
Understanding our users was only the beginning. Next we focused on a set of user outcomes: assumptions about what these users were trying to do. What is the user trying to accomplish? "I want to find the best college fit for me." How does the user want to feel during this process? "I want to feel informed and empowered about my college search." Finally, how does this product get the user closer to their goal? "I want to feel like I'm making the best decision based on my personal preferences and academic performance."
The Product & Features
Overall, we believed that a mobile application would help address the gap between our target audience and their usage patterns vis-à-vis our content and products. On a more granular level, we focused on which features within the app could create meaningful results for the business and our users, employing the same template-style hypothesis statement: we can achieve [w] if [x] can achieve [y] with feature [z]. Using these hypotheses, we were able to prioritize, test, and add or remove features based on our research and feedback.
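As a concrete illustration of that template, here is a minimal Swift sketch of how such hypotheses could be captured as data and queued for prioritization and testing. The type and field names are illustrative assumptions, not a representation of the tooling we actually used.

```swift
// Hypothetical sketch: capturing "we can achieve [w] if [x] can achieve [y]
// with feature [z]" as a data structure so feature bets can be listed,
// prioritized, and marked validated (or not) after an experiment runs.
struct FeatureHypothesis {
    let businessOutcome: String   // [w] e.g. "more users entering the acquisition funnel"
    let persona: String           // [x] e.g. "college-bound high school junior"
    let userOutcome: String       // [y] e.g. "builds a balanced dream/match/safety list"
    let feature: String           // [z] e.g. "guided onboarding questionnaire"
    var validated: Bool? = nil    // nil until an experiment has produced a result
}

// Example backlog entry (illustrative values only).
let backlog = [
    FeatureHypothesis(
        businessOutcome: "increase sign-ups for SAT/ACT test prep",
        persona: "college-bound high school junior",
        userOutcome: "understands which schools fit their profile",
        feature: "personalized dream/match/safety results"
    )
]
```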
The Design
Once we completed the initial research, we were ready to move on to the minimum viable product (MVP). To do that, we first asked ourselves what we most needed to learn about our hypothesis and what the quickest way to learn it was. This would be the basis of our MVP and the test bed for running experiments, with outcomes driving the direction of features and whether they should be explored further, refined or removed.
User Flow
I began sketching a simple user flow to get to a working prototype we could test. Based on the prioritized user-outcome list, the flow included the initial features we wanted to validate first. Once completed, I moved on to creating a basic tappable prototype we could use on devices out in the field to gather feedback.
The Prototype
Following the structure of the user flow, I created wireframes for our first prototype. We debated how to deliver our first experience: a tappable mockup (InVision, Marvel, etc.) or a coded prototype. We decided that real-world functionality, i.e. tapping an on-screen keyboard to enter information and seeing dynamic results, would better represent the actual experience and provide richer feedback and learnings for the team. Working with our iOS engineer, we built a rough but working prototype backed by a local database that displayed dynamic results based on each student's answers during onboarding.
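As a rough sketch of how a prototype like that could behave, the example below shows one way a small bundled dataset might be filtered and bucketed into dream, match and safety results from a student's onboarding answers. The School and OnboardingAnswers types, the score thresholds and the sample data are assumptions for illustration, not the prototype's actual data model.

```swift
// Minimal sketch: classify schools from a local dataset against one
// student's onboarding answers. All names and thresholds are hypothetical.
struct School {
    let name: String
    let state: String
    let averageSAT: Int
    let annualTuition: Int
}

struct OnboardingAnswers {
    let satScore: Int
    let maxTuition: Int
    let preferredStates: Set<String>
}

enum Fit { case dream, match, safety }

func classify(_ school: School, for answers: OnboardingAnswers) -> Fit? {
    // Drop schools that fail hard constraints (cost, geography).
    guard school.annualTuition <= answers.maxTuition,
          answers.preferredStates.isEmpty || answers.preferredStates.contains(school.state)
    else { return nil }

    // Bucket the rest by how the student's score compares to the school's average.
    switch answers.satScore - school.averageSAT {
    case ..<(-100):      return .dream   // well below the school's average
    case (-100)...100:   return .match   // roughly in range
    default:             return .safety  // comfortably above
    }
}

// Example run against a tiny local dataset.
let schools = [
    School(name: "Example University", state: "NY", averageSAT: 1350, annualTuition: 52_000),
    School(name: "Sample State College", state: "NJ", averageSAT: 1150, annualTuition: 28_000)
]
let answers = OnboardingAnswers(satScore: 1280, maxTuition: 60_000, preferredStates: ["NY", "NJ"])
let results = schools.compactMap { school in
    classify(school, for: answers).map { (school.name, $0) }
}
print(results)
```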
Once the build was complete, we deployed it onto our team's phones and went into the field, interviewing students in our SAT prep classrooms, at various high schools around New York City, and among children of friends who fit our target demographic.
Refinements
After a few weeks cycling between feedback, iteration, QA and UAT, we were comfortable with the validated learning, visual fidelity and functionality of the app. We released a 1.0 iOS build, with an Android release following a few months later. Throughout that process, I continued to refine the visual design of the app, ensuring that the iOS and Android experiences each felt native to their respective platforms while still feeling like part of The Princeton Review family of products.
Closing the Loop
After releasing the new product to the app store, we continued conducting in-person testing, monitored user feedback, and deployed in-app and e-commerce analytics. We continued to test new hypotheses based on this research and iterate on the app. This included splitting our onboarding flow into two parts, one before registration and one after. The change increased the number of users completing sign-up, reduced drop-off, and gave us another channel (email) for drip-marketing opportunities when users did not fully complete onboarding. We also tested push notifications, monitoring how different messaging affected engagement.
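Purely as an illustration of the split-onboarding experiment, here is a hedged Swift sketch of how the pre- and post-registration steps and per-step drop-off events might be modeled. The step names and the logEvent helper are hypothetical placeholders, not our production analytics API.

```swift
// Hypothetical sketch: onboarding split into a lightweight pre-registration
// phase and a post-registration phase, with a drop-off event logged per step.
enum OnboardingStep: String, CaseIterable {
    // Asked before an account exists: low-friction preference questions.
    case academics, geography, tuition
    // Asked after registration, so partial completers can be reached by email.
    case campusLife, greekLife, review
}

let preRegistrationSteps: [OnboardingStep] = [.academics, .geography, .tuition]

func logEvent(_ name: String, metadata: [String: String] = [:]) {
    // Placeholder for the real in-app analytics call.
    print("event:", name, metadata)
}

func complete(step: OnboardingStep, isRegistered: Bool) {
    logEvent("onboarding_step_completed", metadata: [
        "step": step.rawValue,
        "registered": String(isRegistered)
    ])
}
```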