I believe great products—and great teams—start with transparency. To help you get to know me better, I’ve shared written responses to questions employers commonly ask during the hiring process. Feel free to scroll or use the buttons below to jump to specific topics.

General Screening Questions

  • I’m a Senior Product Manager with over 10 years of experience leading supply chain, procurement, and logistics technology initiatives across Amazon, Whole Foods Market, and Danaher. My background is rooted in both business and technical domains—I specialize in building scalable, customer-centric software products that streamline operations, reduce costs, and drive measurable impact.

     

    At Amazon, I currently lead product for the supply chain tech stack supporting over 530 Whole Foods Market stores and 10 distribution centers. My scope spans ordering, inventory management, procure-to-pay, and receiving systems—both mobile and web-based—used by 30,000+ team members. I’ve launched large-scale initiatives including the automation of online grocery replenishment using robotics (Micro Fulfillment Centers), variable weight ordering for perishable goods, and the Supplier Freight Initiative—a $6M TMS platform that streamlined Amazon’s inbound logistics and drove a 77% increase in throughput.

     

    Prior to Amazon, I worked at Danaher in the medical device space, where I led global procurement strategy and managed a $115M spend portfolio, delivering $10M+ in savings through supply chain optimization, strategic sourcing, and automation.

     

    Throughout my career, I’ve consistently worked at the intersection of product, operations, and engineering—translating complex problems into intuitive systems that scale. I bring a hands-on leadership style, deep domain expertise in ERP/TMS/WMS systems, and a track record of launching solutions that have delivered over $50M in cumulative business impact.

  • Yes, I am currently employed at Amazon and have been since March 2020 (5+ years). My wife and I got married in 2024, and we’ve agreed that we want to be closer to family in the Denver area as we begin the next chapter of our lives—buying a home and starting a family of our own. While I would love to continue working for Amazon, it’s uncertain whether I can retain my current role while working from the Denver office. I am actively discussing this possibility with my leadership, but in parallel, I am exploring other opportunities. I find my current role both challenging and fulfilling, and I have a great manager. However, if it comes down to choosing between staying at Amazon and living closer to family, my priority will be my family. There isn’t a strict deadline for finding a job, but my wife and I plan to move to Denver sometime in 2025—ideally sooner rather than later.

  • I am primarily targeting Principal or Senior-level Technical Product roles focused on Supply Chain Systems, as this best aligns with my expertise and interests. Beyond that, I am also open to Principal or Senior-level Technical Product roles in broader technology domains. Additionally, I am considering Director, Senior Manager, or Program Manager positions with a focus on supply chain areas such as procurement, logistics, operations, and transportation. While my preference is for technical product roles, I am also open to non-technical product roles if they align with my skills and offer the right opportunity.

    Additionally, I would consider consulting roles, provided that travel is limited to less than 25%. I am open to remote, hybrid, or in-office roles within 50 miles of the Denver metropolitan area and am flexible between Individual Contributor and People Management positions.

  • I have experience across multiple industries, including grocery, retail, e-commerce, technology, manufacturing, transportation, and medical devices. My primary interest lies in solving complex problems within these spaces, particularly in supply chain domains. I am also interested in SaaS products for both B2B and B2C markets.

  • Beyond a competitive salary and benefits, my top priorities in a new role are challenging and meaningful work, autonomy and ownership, opportunities for career growth, and strong management and leadership. I find that I am most fulfilled in a job when these aspects are present.

  • What sets me apart is my unique blend of operational depth and product leadership—paired with a track record of delivering scalable, tech-enabled solutions across highly complex supply chain environments. I don’t just build features—I design systems that simplify complexity, drive measurable business value, and scale across enterprise workflows.

     

    I’ve led product strategy for initiatives involving hundreds of engineering weeks and multiple business functions—from roadmap development through execution—navigating ambiguity, aligning with senior stakeholders, and shipping outcomes that improve efficiency, accuracy, and user experience. Whether it’s building integrated tools for store and DC ordering, streamlining variable weight processes, or launching governance models for buyer workflows, I consistently deliver end-to-end ownership and results.

     

    I also bring a deep understanding of how technology can empower operations—rooted in hands-on experience with internal tool development, API integrations, and cross-platform planning. My ability to connect the dots between data, systems, and human behavior allows me to design experiences that work not just for the business, but for the people who rely on them every day.

     

    Ultimately, I bring a builder’s mindset, a bias for collaboration, and a track record of turning complex operational problems into elegant, user-focused solutions—qualities I believe are essential to helping Grainger evolve its enterprise platforms and serve customers more efficiently at scale.

  • I own the Amazon Grocery Supply Chain systems, though I spend most of my time within the Ordering and P2P (procure-to-pay) systems. This includes In-Stock Manager-facing dashboards, team member-facing web applications, and mobile applications (hosted on Honeywell CT60 devices). These tools support approximately $21.1 billion in customer sales, drive cost-reduction initiatives that yield annual savings of $20–50 million, and provide essential functionality for Amazon Grocery businesses, including Whole Foods, Amazon Fresh, and Amazon Go.

     

    My customers are the end users, including Whole Foods team members in stores, DC & Retail Operations, DC purchasing, and In-Stock teams, spanning across five VPs on the Whole Foods side. On the Amazon tech side, my team, Grocery Supply Chain Tech (GSC), consists of Software Developers, Software Development Managers (SDMs), Technical Program Managers (TPMs), and Technical Product Managers (PMTs). I operate in a hybrid role across PMT, TPM, and SDM functions. 

     

    I lead the development of a three-year roadmap by producing 3YR plans, Press Release Frequently Asked Questions (PRFAQs), Business and Product Requirements Documents (BRDs), and supporting technical scoping and architectural planning through High-Level and Low-Level Designs. I also oversee User Acceptance Testing (UAT), participate in SCRUM and sprint planning with engineering teams, and coordinate technology rollouts, ensuring seamless integration into business operations. My role serves as the glue across the Software Development Lifecycle (SDLC), bridging business stakeholders, senior leadership, and engineering teams to guide technical delivery from requirements to deployment.

  • My day-to-day work varies based on where we are in the product development lifecycle for a given feature. I manage a portfolio of 10–15 major projects, each with a complex set of features. On any given day, I might be demoing a feature to business stakeholders, creating training materials for end users, writing requirements for an upcoming initiative, building UX mocks in Figma, running SQL queries against our data warehouses to pull together analyses, or drafting a tradeoff document to align stakeholders on a problem and recommend a path forward.

    During the design phase, I ensure that the design meets business requirements and provide feedback on components or services that can be leveraged. If we are piloting a feature, I focus on troubleshooting issues, identifying root causes, and working with engineers to deploy fixes while minimizing store impact. Regardless of the phase, my work consistently involves collaboration with business teams, developers, and senior leaders across Amazon and Whole Foods. Additionally, most days require a mix of highly technical writing and conceptual documentation to ensure end users and customers clearly understand what to expect from an upcoming launch.

    Ultimately, my role is about ensuring we are solving the right problem with the right technology, that engineers have a deep understanding of the requirements to build an effective solution, and that we successfully release scalable, value-added features that drive revenue, reduce costs, and improve operational efficiency.

  • I would describe my leadership style as structured yet flexible.

    Structured: I focus on establishing and following well-defined processes to approach work methodically. For example, I firmly believe that every feature, regardless of its size, must have a business requirements document (BRD) with formal sign-off from stakeholders. When working with SDMs or SDEs, I emphasize the importance of setting clear timelines for milestones such as design, implementation, QA testing, UAT, and pilot phases. These mechanisms ensure transparency with stakeholders and accountability for both product and engineering teams.

    When changes arise that affect the roadmap or the timelines of associated features or products, I require the creation of a trade-off document. This document outlines the options, assesses their impact, gathers stakeholder feedback, and facilitates a group decision rather than leaving it to an individual. Additionally, I ensure we have at least bi-weekly written program reviews and meetings with stakeholders. These sessions provide a platform for stakeholders to voice concerns, share feedback, and make recommendations to guide the project in the right direction. Artifacts like these are essential for product management, as they help ensure we build and deliver what the customer truly needs.

     

    Flexible: While I value structure, I also believe in providing my team with autonomy. My goal is to equip my team with the tools and resources they need to succeed without micromanaging their work. I aim to be the person my stakeholders and team members want to involve in solving problems—not someone they feel obligated to include.

     

    When challenges arise, I expect team members to analyze the situation, propose potential solutions, and present their perspective on the best course of action. This collaborative approach encourages critical thinking and debate while fostering ownership. At the same time, I hold employees accountable for the deadlines they commit to. If dates are missed, they are communicated and documented, along with actions to get the project back on track.

    I operate as a servant leader with the following principles:

    1. Empathy: I prioritize understanding the needs and concerns of my team members.

    2. Selflessness: My growth comes from building up and motivating my team. If help is needed, I will work alongside them to resolve challenges.

    3. Empowerment: I provide employees with the resources and guidance they need to work autonomously and deliver results.

    4. Humility: I foster open communication, encourage feedback, and actively recognize and showcase my team’s achievements.

    5. Commitment to Growth: I am dedicated to the personal and professional development of my team. I strive to identify what excites them, provides opportunities for growth, and aligns with their sense of purpose and fulfillment.

    By balancing structure with flexibility, I aim to create an environment where both the team and stakeholders can succeed while maintaining a focus on transparency, collaboration, and growth.

  • The past five years at Amazon have been nothing short of fast-paced. I’ve helped build supply chains, negotiate contracts, and launch supply chain applications, all under tight timelines. Throughout my career, I’ve learned that solving problems often comes down to effectively leveraging people, processes, or technology. In fast-paced environments, tradeoffs among these elements are inevitable. From a product perspective, this means prioritizing P0, P1, and P2 requirements, gathering customer feedback early, and iterating quickly using Agile methodologies.

     

    Soft skills are equally critical in such environments. I rely on a bias for action, making informed decisions without waiting for 100% of the data or requirements, earning trust with my team and stakeholders, and deeply understanding customer needs upfront. At a future employer, I would use my product management toolset—including BRDs, three-year strategic planning, customer feedback loops, and data analysis—to navigate challenges and drive results. When faced with a problem, I first attempt to resolve it independently or with available resources. If additional support is required, I escalate appropriately to involve leadership or secure the resources and talent necessary to address the issue effectively.

  • I directly supervise a team of five, with titles including Procurement Engineer, Sourcing Specialist, Category Manager, and Procurement Coordinator. Their primary duties are to monitor and control the supply base, solve quality issues, identify and qualify new sources, lead cross-functional savings projects, help meet quality and compliance objectives, and negotiate agreements, ultimately helping manage the supply base.

  • What I enjoy most about solving cross-functional issues is seeing how a product interacts across multiple functions. You get to see aspects of the customer, how the backend code functions, and how integration occurs across supply chain, procurement, operations, finance, and accounting. I enjoy bringing people together in my personal life, and being able to do that at work is fun.

  • Yes, I strategically manage product roadmaps and drive agile development across five engineering domains, indirectly leading the work of over 30 software engineers. My primary customers include In-Stock and Retail Operations teams, Distribution Center Operations, retail store team members, and ultimately the Whole Foods Market customer. I work closely with both internal and external stakeholders, ensuring that our product vision aligns with business objectives while maintaining feasibility for engineering teams. I prioritize features based on business impact, technical complexity, and operational urgency, ensuring alignment across teams with competing priorities.

  • I have experience using Tableau, Power BI, and Excel for reporting and analytics. However, in my current role, the majority of our reporting is built on AWS QuickSight, which is Amazon’s equivalent to Tableau. I am also proficient in SQL for data extraction, transformation, and analysis.

  • Yes, my current role includes ownership of user-facing web dashboards and mobile applications that optimize supply chain operations at Whole Foods Market. These systems support inventory management, supplier scheduling, and store ordering processes. On the web application side, I manage tools that allow In-Stock teams to configure target inventory positions, manage supplier schedules, override forecasting outputs, and handle ordering exceptions to improve replenishment accuracy.

     

    On the mobile application side, I oversee Android-based tools that help store teams cycle count inventory, receive products, record and financially evaluate shrink, and determine just-in-time reorder quantities. These applications enable purchasing for both first-party and third-party suppliers via EDI integration, ensuring that the right products are delivered to the right stores in the right quantities at the right time.

     

    Beyond store operations, I manage systems responsible for financial reconciliation, including three-way matching between receipts, purchase orders, and invoices to ensure accurate supplier payments. Additionally, I oversee Infor’s Warehouse Management System (WMS), which handles inventory management, allocation, and fulfillment across distribution centers and stores. Many of the systems I manage are Amazon-built, AWS-based microservices developed using JavaScript and REST APIs, and deployed across AWS cloud services such as DynamoDB, S3, SNS, SQS, SES, and Lambda.

  • I am a U.S. citizen and do not require work sponsorship now or in the future.

Leadership Principles & PM Capability

  • Situation: One of my most impactful projects was the Micro Fulfillment Center (MFC) Automation initiative, where Amazon piloted Automated Storage and Retrieval Systems (ASRS) at a Whole Foods Market location on the East Coast. These robotic racking systems are designed to store frozen, chilled, and ambient products, with the primary goal of reducing operational costs by shifting online order fulfillment from in-store shoppers to automated systems.

     

     

    Task: I was tasked with defining and implementing automated replenishment workflows—a completely new concept for Whole Foods. Our grocery supply chain team was looped into the initiative relatively late, around February 2024, and we were working toward a code completion deadline of October 2024. The scope was ambiguous, with limited documentation and cross-functional alignment at the outset. In addition to replenishment logic, we needed to account for capacity constraints, supplier scheduling, inventory valuation, and purchase order transmission—none of which had previously been handled in an automated environment at Whole Foods.

     

     

    Action: Given the tight timeline and lack of definition, I took a structured and pragmatic approach to reduce ambiguity and accelerate execution. First, I broke down the scope into key functional areas:

    •             Inventory Management

    •             Reorder Quantity (ROQ) Generation

    •             Supplier Scheduling & Receiving

    •             Purchase Order (PO) Creation & Transmission

    •             Accounting & Financial Reporting

    •             Induction Processes

     

    I created clear requirement buckets and used them to facilitate focused conversations with stakeholders. I then led requirement-gathering sessions with five VP-level stakeholders and their teams across supply chain, tech, finance, and operations. To drive alignment and descope intelligently, I introduced a priority framework:

    •             P0: Critical for launching by October 2024

    •             P1: Needed to scale to full volume at a single site

    •             P2: Required to expand to multiple MFCs

     

    This helped us quickly agree on minimum viable scope while setting expectations for future phases. For example, P0 included automated replenishment logic, P1 introduced capacity guardrails, and P2 focused on expanding selection and optimizing workflows further.

     

    Result: By decomposing the problem and prioritizing features, we stayed on track for code completion by October 2024. We delivered incremental value, launching capacity guardrail solutions in Q1 2025, and deployed the ASRS equipment along with key automated supply chain components in December 2024. This enabled a seamless customer experience, allowing customers to order Fresh and Whole Foods items in a single checkout, all fulfilled through automation.

     

    The impact has been significant:

    •             Support for over 21,000 SKUs in the system

    •             Projected to generate $10M in online revenue

    •             Expected to reduce fulfillment costs by 40%, equating to $3.5M in annual savings

  • Situation: I was leading the development of Inbound Supply Chain features for Automated Buying and Receiving as part of the Micro Fulfillment Center (MFC) initiative—an SVP-level priority.

     

    During the Business Requirements Document (BRD) review, multiple concerns emerged around the risk of over-ordering relative to available space, product temperature constraints, and FDA storage criteria. These risks required a separate BRD and design track for capacity guardrails. After evaluating timelines, it became clear that the long-term capacity management solution would not be ready in time for the January 2025 inventory ramp-up.

     

    On top of this, we faced tight code completion deadlines, limited QA time, and the added complexity of implementing systematic capacity controls, something that had never been done before in Whole Foods Market’s supply chain systems.

     

     

    Task: Despite these challenges, it was essential to deliver a working solution to support the ASRS (Automated Storage and Retrieval System) inventory ramp. Without some form of capacity control, there was a real risk of overfilling the system, which could cause fulfillment failures or violate temperature/FDA requirements. I needed to find a path forward—quickly.

     

     

    Action: To mitigate the risks posed by the delayed long-term solution, I designed an interim, lightweight capacity guardrails solution. This “scrappy” contingency plan enabled business users to request capacity truncations at the ASIN level by submitting SIM tickets to engineering.

     

    The temporary system allowed teams to:

    1.     Analyze and allocate ASRS space across Amazon Fresh, WFM Core, and WFM ASINs.

    2.     Prioritize SKUs most critical to program success.

    3.     Determine maximum reorder quantities per ASIN to fit within the available capacity.

     

    This manual, configuration-based approach allowed us to move forward with testing, without waiting for the full automation pipeline to be ready. It was not as scalable or elegant as the long-term plan, but it bought us critical time.

     

     

    Result: While the interim solution did require trade-offs—such as limited SKU coverage and more manual configuration—it enabled the successful ramp-up of inventory into the ASRS in time for the January 2025 milestone.

     

    Thanks to this stopgap, we were able to:

    •             Maintain momentum on the MFC pilot

    •             Mitigate the operational risks of over-ordering

    •             Deliver a solution that worked within our constrained timeline

     

    My proactive planning and willingness to compromise on scope while preserving critical functionality ensured we hit our delivery goals without compromising safety or customer availability. This experience reinforced that delivering results sometimes means getting creative, being pragmatic, and always staying one step ahead of risk.

  • Situation: I was leading the development of Inbound Supply Chain features for Automated Buying and Receiving as part of Amazon’s Micro Fulfillment Center (MFC) initiative. This program supported a Senior Vice President’s strategic goal and was not communicated to our team until Q1, yet required full delivery by October 2024—making speed and precision essential across product, engineering, and testing.

     

     

    Task: After completing product scoping and aligning on requirements, I realized that QA testing had become a critical risk area. While not traditionally within my product role, I saw that our limited QA resources and newly onboarded contingent QA engineers lacked context on Whole Foods Market’s supply chain systems and were unsure what and how to test. To ensure we could deliver high-quality, production-ready code on time, I decided to step in and own the testing enablement effort.

     

     

    Action: I authored a comprehensive QA enablement guide that included:

    •             A program overview and business context

    •             A breakdown of newly developed features and integrations

    •             Recommendations for testing scope, coverage, and expected system behaviors

     

    I also spent over 10 hours training the QA team, answering questions, and walking them through system workflows, dependencies, and use cases. I encouraged them to build a requirements traceability matrix, linking test cases directly to product requirements to ensure coverage and accountability. Throughout the process, I remained a hands-on resource, providing context on edge cases, validating bugs, and ensuring the team had what they needed to be successful.

     

     

    Result: The QA team was able to create a robust testing plan, which was reviewed and approved by stakeholders. The plan surfaced several critical bugs, which were resolved prior to the October launch.

     

    Because of my initiative to step outside my core responsibilities, we were able to:

    •             De-risk a major launch tied to senior leadership goals

    •             Deliver high-quality code on time

    •             Establish repeatable QA processes for future workstreams

     

    This experience demonstrated my commitment to ownership and ensuring end-to-end success, even when it meant going beyond my formal role.

  • Situation: Every year, Product teams at Amazon participate in Operational Planning, known as OP1/OP2. This annual cycle is critical for setting the vision and priorities for the following year. The process includes aligning on key initiatives (“big rocks”), scoping technical solutions, T-shirt sizing to estimate effort, and documenting rationale for what will and won’t be built. The outcome determines how resources are allocated across business and tech teams.

     

    During the 2025 OP1 cycle, I was responsible for defining priorities within the grocery supply chain space, which spanned multiple domains with often competing needs and a tight deadline to align both business and tech leaders.

     

     

    Task: My task was to create a clear, prioritized set of initiatives for 2025, backed by data and cross-functional alignment. I needed to gather input across 30+ stakeholders, surface conflicting priorities, and drive consensus—all while working against a tight planning timeline. The final output would directly inform funding, headcount allocation, and roadmap commitments for the upcoming year.

     

     

    Action: To get ahead of ambiguity and keep the process moving quickly, I first developed an independent draft proposal that included:

    •             A breakdown of initiatives by supply chain domain

    •             Business justification tied to broader goals (reduce cost to serve, increase revenue, improve in-stock, labor efficiency, and shrink)

    •             Clear problem statements, dependencies, impacted subteams, entitlement assumptions, and a first pass at the technical scope

     

    With this foundation, I then hosted three four-hour workshops with over 30 business stakeholders across five VP-led orgs, including In-Stock, Retail Ops Perishables, Center Store, and Culinary. These sessions provided space for teams to propose new ideas, ask questions, and debate priorities. When misalignment surfaced between domains, I asked targeted questions to identify tradeoffs and reframe conversations around shared outcomes.

     

    After capturing business needs and priorities, I translated them into technical requirements and led 12 hours of working sessions with ~10 developers. We used a T-shirt sizing approach to estimate scope and effort, while also inviting the dev team to suggest ideas they were passionate about delivering. This helped ensure technical feasibility and engineer buy-in.

     

     

    Result: With business input, developer sizing, and a clear view of available capacity, I created a consolidated roadmap that prioritized the highest-impact work. I packaged everything into a comprehensive planning document, which was shared with business and tech leadership for roll-up across broader tech teams and final approval.

     

    My structured and collaborative approach enabled us to:

    •             Align five VP-level organizations on shared priorities

    •             Incorporate feedback from 30+ stakeholders

    •             Translate strategy into a roadmap with three years’ worth of scoped and prioritized development work

     

    The final output gave leadership confidence in our tradeoffs and decisions, and positioned our teams to execute against meaningful, measurable outcomes in 2025.

  • Situation: When I first joined my role supporting the Whole Foods Market supply chain, there was no dedicated Product Manager overseeing perishables ordering. There was no clear roadmap—just a scattered list of feature requests with no alignment on why they were needed or what business value they would drive.

    One high-friction area that stood out was in perishables selection, especially in categories like Meat and Seafood. Store team members were using manual order guides to track inventory and write orders, spending over an hour per day on this task. The process was highly error-prone, leading to frequent stockouts or excess inventory, which drove up shrink.

    Task: I was tasked with creating a strategic roadmap for this space. I chose to start by focusing on semi-automating perishables selection—something with clear pain points and visible opportunity for transformation. But before proposing a solution, I needed to fully understand the existing process and define the why behind any investment.

    Action: I followed a structured, customer-obsessed approach:

    •             I went to Gemba—visiting stores over a two-month period to observe ordering workflows firsthand.

    •             I mapped each step of the process, timed how long each task took, and interviewed dozens of store team members to capture pain points and improvement ideas.

    From that research, I created a whitepaper that framed the problem and outlined a solution:

    •             Problem: Manual, time-intensive ordering led to inventory inaccuracies, stockouts, and shrink.

    •             Proposed Solution: A digital, semi-automated ordering tool integrated with backend systems.

    •             Benefits: Improved in-stocks, reduced shrink, and labor savings.

    •             Projected Impact: An estimated $20M in annual savings.

    This analysis catalyzed what became the Variable Weight project. Working with engineering, we built the Store Ordering Tool (SOT)—a mobile application on Honeywell CT60 devices that:

    •             Displayed a daily dynamic order list, eliminating paper guides.

    •             Allowed bulk edits and individual item adjustments with reason codes for analytical feedback.

    •             Integrated with backend systems to generate accurate recommended order quantities, driven by data from forecasting, real-time inventory, sales, catalog, and item inputs.

    To ensure the tool’s accuracy and value, we also overhauled key backend infrastructure:

    •             Rebuilt forecasting logic for perishables

    •             Improved real-time inventory tracking

    •             Enhanced catalog and sales data pipelines

    Result: We launched the SOT solution in 2023 and continue to enhance it through a multi-year roadmap. In 2024, we added DC out-of-stock awareness, which enabled the tool to:

    •             Detect unavailable items at distribution centers (1P DCs)

    •             Surface UI-driven substitution workflows to prevent failed orders and improve fill rates

    This project has fundamentally modernized how perishables ordering works at Whole Foods. Beyond operational efficiency, it has elevated the ordering experience, reduced errors, and positioned us for long-term scalability across fresh departments. Now, 530 stores no longer rely on manual ordering, cutting ordering time in half, improving in-stock rates, and reducing shrink. Since launch, the initiative has delivered over $20M in savings, with continued enhancements planned to further optimize the process.

  • Situation: In 2023, I received feedback that I tend to overemphasize certain leadership principles—particularly Insist on the Highest Standards and Ownership—to the point that it impacted my work-life balance and made it difficult for others to fully take ownership.

     

    Looking back, the feedback was valid and came during an incredibly challenging period. We were navigating significant ambiguity, ongoing employee turnover, resource constraints, Amazon-wide layoffs, and a divide between the WFM and Amazon organizations. Stakeholder engagement was low, and the path forward was often unclear. In response, I pushed myself hard to deliver results on behalf of the business and my tech team, which led to burnout and frustration. This was a recurring theme throughout my time at Amazon, stemming from my internal drive to succeed—regardless of the circumstances.

     

    Task: The feedback challenged me to reflect on how I was showing up—not just as a contributor, but as a leader of others. It became clear that if I wanted to scale my impact and protect my own sustainability, I needed to shift from doing everything myself to empowering others to lead.

     

    Action: In 2024, I made a deliberate effort to train and enable others to take on more responsibility. I began delegating more frequently, not just tasks but full areas of ownership, even when I knew it might mean accepting a higher risk of failure.

     

    When teammates or stakeholders weren’t able—or willing—to step up, I would still jump in to ensure goals were met, but only after giving others the first opportunity. I also worked to adjust my mindset, realizing that sharing ownership, even when imperfect, is essential for long-term success and team growth.

     

    Importantly, I didn’t abandon my high standards or sense of ownership—I simply became more thoughtful about when and how I applied them. I also began setting clearer expectations with stakeholders about shared responsibility and worked to foster accountability across the team, rather than taking it all on myself.

     

     

    Result: By investing in others and letting go of the need to control every outcome, I saw my tech team and stakeholders grow in their capabilities and confidence. Tasks that previously required my direct involvement were being handled independently.

     

    I’m still someone who’s willing to step in when needed—but now, my default is to delegate first, then support as necessary. I’ve learned that when you always show you’ll catch every dropped ball, people may stop trying to catch their own. Leadership means setting high standards and trusting others to rise to them—even if it takes time and patience.

     

    This feedback and the growth that followed helped me become a more scalable leader, and someone who earns trust not just by delivering, but by enabling others to do the same.

Product

General Questions & Mental Models

  • The start of product development begins with identifying the problem we are trying to solve and understanding why it matters. At Amazon, this initial groundwork is formalized in a PRFAQ (Press Release Frequently Asked Questions) document. This document presents a vision for the product, written as if the company were announcing it publicly. It outlines the problem, the target market, the business value, and why it’s a strategic investment for the company. The PRFAQ serves as a guiding document to align stakeholders on the "why" and the desired future state.

     

    Once the vision is clear, we move to defining requirements for the individual features that will bring the product to life. This process starts by working backward from the desired user experience. User stories and detailed requirements specify the "what" and "why," including the functionality and elements needed to achieve the end state. Requirements often include Figma UX mocks to illustrate the front-end design and prioritize features, with P0s identifying must-haves for the product. The document also contains the input and output metrics used to determine success; normally, these metrics are revenue growth or cost savings, measured in operating profit. It is reviewed and approved by all relevant stakeholders to ensure alignment.

     

    Following the requirements phase, we collaborate with engineering teams to design the solution. This typically involves creating 1–2 high-level designs and several low-level designs (5–10, depending on complexity). Once the design is finalized, we estimate the level of effort during grooming sessions, sequence the development tasks, assign resources, and create an implementation timeline. The timeline includes key milestones for code completion, QA testing, user acceptance testing (UAT), and the rollout plan.

    Development is carried out in two-week sprints for incremental delivery. As features are completed, they undergo QA testing in lower environments. This process includes regression testing for existing functionality, net new testing for new features, and load testing to ensure system stability. After QA, we transition to user acceptance testing, where a subset of users validates the new features and provides feedback. Bugs are fixed, the feature backlog is reprioritized, and a go/no-go decision is made for piloting in production.

     

    In the pilot phase, the software is tested in real-world conditions, additional bugs are resolved, customer feedback is obtained, metrics are tracked and features are iterated as needed to achieve an MLP (Minimum Lovable Product) state. Once the product meets its intended goals, we scale the solution to broader customer and user bases.

     

    In parallel, we implement a change management and enablement process to train users on the product, its features, and the new user experience. This ensures smooth adoption and maximizes the product's impact post-launch.

  • Prioritization starts with aligning on what is most important to the customer at the current moment. Changes in priorities must be guided by structured trade-off decisions that align with business objectives. Key considerations include customer value, entitlement, timelines, cost, and resource availability. Creating a trade-off document is critical; it outlines available options, their implications, and a recommended path forward. This document should be reviewed and approved by all relevant stakeholders to ensure alignment and clarity.

     

    Once a trade-off decision is made, its downstream impacts must be clearly communicated. This includes adjustments to the timeline for de-prioritized items, resource reallocation, and transparent communication to internal teams and customers about shifts in focus or delivery expectations. The decision-making process should be anchored in identifying the option that provides the most value to the business, typically evaluated against key metrics like revenue generation, cost savings, or customer impact. By doing so, trade-offs become actionable, justifiable, and effective in advancing the broader goals of the organization.

  • A bit of both. I am a product specialist in the sense that I am a subject matter expert in the procurement domain for Whole Foods from a technology perspective. I have a deep understanding of my product’s users, stakeholders, the technology architecture, and the end-to-end system landscape within this domain. This ownership allows me to effectively manage and optimize the procurement space.

     

    At the same time, I operate as a generalist by understanding how inputs from other systems and products, owned by different teams, interact with and impact my domain. For example, while I don’t own the customer checkout process via POS systems, I leverage POS data to train forecasting models that are critical to procurement. This cross-domain understanding enables me to collaborate effectively and ensure seamless integration between systems.

     

    I flip back and forth between both roles to meet the needs of whatever we are specifically working on. I am also assigned to areas outside of my domain, where I support other supply chain functions.

    Additionally, I would consider consulting roles, provided that travel is limited to less than 25%. I am open to remote, hybrid, or in-office roles within 50 miles of the Denver metropolitan area and am flexible between Individual Contributor and People Management positions.

  • Traditionally, we employ both qualitative and quantitative measures for feature launches. Change Enablement focuses on gathering qualitative feedback related to customer experience, satisfaction, and employee engagement, often through team member surveys using a 1-5 scale with space for written testimonials.

     

    In parallel, we track input metrics such as adoption rates, reason codes, and trouble tickets submitted for issues. These feed into output metrics like cost savings (labor efficiency or shrink reduction) and revenue enablement (improvements in in-stock levels).

    For example, when we launched the DC Out-of-Stock and Substitution features, which made the Store Ordering Tool out-of-stock aware and enabled a replacement workflow, we collected both qualitative and quantitative data.

     

    • Qualitative Feedback: Responses were mixed. Some stores praised the new workflow, providing strong testimonials, while others flagged accuracy issues with out-of-stock data due to catalog input discrepancies at specific DCs and stores.

    • Input Metrics: Adoption was ~95% in areas without catalog issues, while in areas with data discrepancies, adoption dropped to ~50%.

     

    Both datasets were instrumental in identifying and root-causing the catalog issues, enabling us to correct the inputs and improve accuracy.

     

    While I view both qualitative and quantitative data as essential, I prioritize building robust qualitative feedback mechanisms before a full launch to identify potential issues early. I also favor running pilots with a limited set of stores for a sufficient period to minimize risk, reduce the blast radius, and enable incremental delivery.

    Periodic 1:1s with a peer from each group, consistent communication, and listening to their problems and helping to address them.

     

    I build trust with my team and stakeholders through consistent communication and by delivering results. As a product leader, I send bi-weekly updates across the buying domain, segmented into three areas: Store Buying, DC Buying, and Automation. These updates include a visual roadmap by quarter and a detailed table outlining projects with milestones such as Requirements, Technical Scoping, Design, Implementation, QA Testing, Pilot, and Rollout. Each project is accompanied by a written narrative, status (Green, Yellow, Red), and alignment to our goals.

     

    These bi-weekly updates drive a meeting with 50+ stakeholders, typically attended by representatives from In-Stock, Retail Operations, Accounting, DC Operations, Product, and other business functions. The forum provides transparency on timelines, risks, and priority changes while enabling stakeholders to ask questions, voice concerns, and share feedback. This open dialogue often leads to actionable outcomes, ensuring stakeholders feel heard and see their input directly impact the products my team develops.

     

     

    The bi-weekly updates also highlight features delivered, showcase developer contributions, and foster accountability. While challenges such as scope creep, unforeseen issues, or delays inevitably arise, I maintain trust by providing consistent updates, addressing risks transparently, and ensuring alignment on next steps. This approach keeps stakeholders informed, engaged, and confident in my team’s ability to deliver quality results.

     

  • Mentoring and guiding product managers begins with providing a high-level vision for what we are trying to achieve, why it matters, and how each team member contributes to that vision. Employees need to feel that their contributions are driving toward a larger purpose. This vision often starts with organizational goals, such as S-Team objectives, and is then broken down into smaller, actionable goals and milestones. By doing this, we ensure that everyone is aligned, understands their role, and is marching toward the same objectives.

    Once the vision is clear, I work with the team to define team tenets that guide how we operate. For example, some of the tenets my team currently follows include:

     

    • The buyer is our customer.

    • Do not outsource core competencies.

    • Respect the store and wear the apron.

    • We deliver incrementally.

     

    These tenets establish a shared foundation for decision-making and ways of working, ensuring consistency and alignment across the team.

     

    With a clear vision, individual roles, and operating principles in place, I focus on establishing or leveraging mechanisms to guide and support our work. For example:

    • Standard artifacts: These include tools like 3-Year Plans, PRFAQs, BRDs, HLDs, LLDs, RTMs, QA Plans, UAT Plans, Pilots, Launch Announcements, and Rollout Plans.

    • Communication cadence: Regular touchpoints such as 1:1s with key stakeholders, bi-weekly roadmap reviews with customers, bi-weekly meetings with engineering teams, and QBRs/MBRs for visibility and alignment.

    This framework provides structure and repeatable processes that ensure consistency in delivering results. Product managers are trained to use these tools and mechanisms while tailoring them to their specific customers or stakeholders.

     

    As the most tenured individual contributor on my team, I’ve taken a hands-on role in onboarding and mentoring less experienced or new product managers. From a technical standpoint, I train new PMs on our systems architecture, stack, and the design and purpose of each microservice or application. From a product standpoint, I help them leverage internal wikis that house gold-standard artifacts like 3YPs, PRFAQs, and BRDs. I guide them in using these tools, review their work, provide feedback, and even sit in on their stakeholder meetings to offer constructive recommendations for improvement.

     

    For example, this year I mentored a new team member, Ayesha, who successfully executed her first feature launch. While there were some challenges along the way, the tools, training, and feedback provided enabled her to deliver effectively. This experience reinforced the importance of structured onboarding, hands-on guidance, and continuous support in helping PMs achieve success.

    Through this approach—combining vision, structure, mentorship, and collaboration—I aim to build confident, capable product managers who drive collective success for the team and organization.

  • KPIs are valuable indicators that help determine whether a system is performing as expected. For example, one of the key KPIs we track for my product is in-store availability—whether a product is in stock for customers to purchase. This metric is critical as it directly reflects our ability to keep products on shelves, which drives revenue generation.

     

    While KPIs provide a high-level view, such as signaling whether we are "green" or "red," they often don’t explain why a metric is underperforming. For this reason, we rely on additional metrics and analyses to identify the root cause of issues, particularly when the KPI indicates a problem, such as out-of-stock scenarios.

     

     

    We approach this in a couple of ways:

    1. Reason Codes
      Reason codes help us determine whether a product was recommended for ordering by our software but wasn’t ordered by a team member. If the recommendation was correct but the order wasn’t placed, it’s likely due to gaps in team members’ understanding of how to use the tool. Addressing this involves targeted training and improving tool usability.

    2. Input Accuracy
      Accurate inputs are crucial for reliable outputs. If inputs are flawed, it’s essential to identify which ones caused the issue. For example, we use metrics like forecast accuracy (e.g., Mean Absolute Percentage Error, or MAPE) to evaluate whether incorrect forecasting contributed to the out-of-stock issue. Other potential causes we investigate include supplier stockouts, transportation delays, or systemic errors.
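    As a concrete illustration of the forecast-accuracy check above, MAPE can be computed in a few lines. This is a generic sketch with made-up demand numbers, not our production implementation:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent.

    Skips items with zero actual demand to avoid division by zero.
    """
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    if not pairs:
        raise ValueError("no nonzero actuals to evaluate")
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

# Hypothetical daily demand vs. forecast for four SKUs
actuals = [100, 80, 50, 40]
forecasts = [90, 88, 50, 30]
print(f"MAPE: {mape(actuals, forecasts):.1f}%")
```

    A lower MAPE suggests the forecast inputs are less likely to be the root cause of an out-of-stock, which helps narrow the investigation to supplier, transportation, or systemic causes.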

     

    By understanding the root cause of out-of-stock situations through these methods, we can implement specific countermeasures to address the issue. This approach not only resolves immediate concerns but also ensures continuous improvement in how our tools perform and meet customer needs.

    Cost Accounting: used to track and allocate costs associated with producing and moving goods throughout the supply chain. I am familiar with Bill of Materials (BOM) costing, labor and overhead allocation, standard vs. actual cost accounting, and variance analysis such as purchase price variance (PPV).

    Inventory Accounting: used for the financial valuation of raw materials, work-in-process, and finished goods. I am familiar with cycle count adjustments, inventory write-downs and obsolescence, and shrinkage and spoilage accounting.

    Accounts Payable: processes payments for goods and services purchased. I am familiar with 3-way matching (purchase order, receipt, invoice), managing supplier payment terms, and handling returns or credit memos.

    Accounts Receivable: used to handle incoming payments from customers, especially for logistics providers or suppliers. I am familiar with invoicing based on shipment terms, tracking outstanding payments, and credit terms.

    Procurement Accounting: supports the purchasing process with budget tracking, spend analysis, and supplier performance from a cost perspective. I am familiar with CapEx vs. OpEx, accruals for goods received but not yet invoiced, and purchase order tracking.

    Logistics Accounting: used to capture and allocate transportation and warehousing costs to the right cost centers. I am familiar with freight accruals and third-party logistics invoices.

    Financial Planning & Analysis (FP&A): used to forecast and analyze supply chain costs to support budgeting and strategic planning. I am familiar with gross margin analysis, contribution margin by SKU or channel, and scenario modeling.
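    To make the 3-way matching concept concrete, here is a minimal sketch. The data structures, field names, and price tolerance are illustrative assumptions, not any specific AP system's API:

```python
from dataclasses import dataclass

@dataclass
class Line:
    item: str
    qty: int
    unit_cost: float

def three_way_match(po, receipt, invoice, cost_tol=0.01):
    """Return a list of discrepancies between PO, receipt, and invoice lines.

    A clean match (empty list) means the invoice is safe to pay.
    """
    issues = []
    po_map = {l.item: l for l in po}
    rcpt_map = {l.item: l for l in receipt}
    for inv in invoice:
        p, r = po_map.get(inv.item), rcpt_map.get(inv.item)
        if p is None:
            issues.append(f"{inv.item}: invoiced but not on PO")
            continue
        if r is None or r.qty < inv.qty:
            issues.append(f"{inv.item}: invoiced qty {inv.qty} exceeds received")
        if abs(inv.unit_cost - p.unit_cost) > cost_tol:
            issues.append(f"{inv.item}: price variance {inv.unit_cost - p.unit_cost:+.2f}")
    return issues

# Hypothetical example: supplier over-ships and over-charges
po      = [Line("apples", 10, 1.50)]
receipt = [Line("apples", 10, 1.50)]
invoice = [Line("apples", 12, 1.75)]
print(three_way_match(po, receipt, invoice))
```

    An empty discrepancy list means quantities and prices agree across all three documents, so the invoice can be released for payment; anything else routes to exception handling.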

Scenarios & Examples

  • One of the key benefits of my tool is an algorithm that determines what and how much to order based on specific inputs. When we added perishable selection to the Store Order Tool (SOT), we hypothesized that this enhancement would reduce shrink caused by over-ordering. To validate this, I conducted an analysis by querying shrink transaction data for all stores from the prior year using SQL.

    The analysis revealed the following insights:

    1. Total Shrink Impact: Shrink across all sub-teams accounted for 5.5% of total revenue, equating to $1.1B annually, of which $778M is categorized as known shrink. Notably, 77% ($600M) of this known shrink originated from perimeter store selection, which was not yet utilizing SOT.

    2. Key Shrink Causes: Of the $600M perimeter shrink, $400M was attributed to spoilage-related reasons: spoilage out-of-date, donation out-of-date, spoilage damage, and spoilage quality. This indicated a strong opportunity for technology solutions to reduce shrink caused by ordering inefficiencies.

    3. Meat and Seafood Focus: Meat shrink totaled $108M, while Seafood shrink reached $62M. Even a 1% reduction in shrink for these two sub-teams would result in $1.7M in cost avoidance or savings.

    At the time, this analysis helped justify the value of the features we intended to build, and the investment was further validated after the rollout of the technology. In 2024, we launched SOT to the Meat and Seafood sub-teams. While we are still collecting data, early results show a 5% decrease in shrink. If this trend continues, it would translate to an annual savings of $6M from reduced overbuying.

  • I led the development of Inbound Supply Chain features for Automated Buying and Receiving to support the Micro Fulfillment Center’s SVP goal. During the BRD review, stakeholders raised concerns about potential risks of over-ordering relative to available space, product temperature requirements, and FDA layer criteria. These risks necessitated the creation of a separate BRD and design process. After evaluating the implementation timeline, it became clear that the long-term capacity solutions would not be delivered until after the January 2025 ramp-up.

    In addition to the timeline risk, there were uncertainties around post-code-completion testing time, whether the solution would perform as intended, and the fact that implementing systematic capacity guardrails within WFM processes was a first. To address these risks, I developed requirements for an interim, scrappy capacity guardrails solution. This contingency plan enabled the business to request capacity truncations at the ASIN level through SIM tickets submitted to engineering.

    The configuration allowed the business to analyze:

    1. Space allocation across Amazon Fresh, Core, and WFM

    2. Prioritization of items critical to the program

    3. Whether prioritized items would fit within the available ASRS space, ultimately calculating a maximum reorder quantity per ASIN

    While the interim solution didn’t support all items like the long-term solution would, it enabled inventory replenishment to meet customer demand and supported the inventory ramp into the Automated Storage and Retrieval System (ASRS). My proactive approach—including the development of this contingency plan—helped mitigate risks from the delayed long-term solution and ensured the successful pilot and ramp-up of the new automated equipment. This foresight kept the project on track to meet critical milestones.

  • Yes—2024 was a year that required a lot of adaptability. I was responsible for driving communication and alignment across multiple stakeholders for new ordering features, and to support that, I delivered 16 detailed written program updates throughout the year. These updates became a go-to resource across Center Store, Perishables, Culinary, In-Stock, and Retail Ops. I also held bi-weekly meetings to review updates, gather feedback, and work through project trade-offs collaboratively.

    The year presented several challenges: shifting priorities, reduced development capacity due to attrition, and scope increases that delayed timelines. One of the biggest pivots came when we unexpectedly lost six SDEs to attrition—just as the Micro Fulfillment Center (MFC) program began consuming more engineering capacity. This created a major delivery risk.

    In response, I partnered closely with stakeholders to re-evaluate and realign priorities. I considered available headcount, assessed business value across competing initiatives, and modeled capacity plans to determine what could realistically be delivered. Based on those insights, I made trade-off recommendations and facilitated alignment on a revised roadmap that focused on the most critical goals.

    This pivot helped minimize churn for the dev team, kept stakeholders informed and engaged, and allowed us to deliver as much value as possible despite the constraints. It also strengthened trust across the org and reinforced the importance of flexibility and clear communication in driving outcomes.

    Background: In 2024, Amazon World Wide Grocery Stores Tech (WWGST) launched Amazon’s first automated grocery fulfillment operation, known as a Micro Fulfillment Center (MFC), to reduce overall fulfillment costs. This Automated Storage and Retrieval System (ASRS) is designed to store and fulfill online orders for eligible Whole Foods Market (WFM) and Amazon Fresh items. The system is located in the back of a brick-and-mortar WFM store, enabling a goods-to-person fulfillment model where robotic devices bring items to associates, eliminating the need for manual picking and significantly reducing process walk time. The S-Team goal (highest goal type at Amazon) of this initiative was to improve variable cost per unit (VCPU) by ~3,000 basis points, from $0.87 to $0.52.

    This launch impacted multiple stakeholder groups, including customer experience (via Amazon and WFM e-commerce platforms), grocery automation (managing 3P robotic equipment integration), and the suite of ERP-like microservices under my ownership. These microservices span key supply chain domains such as receiving, inventory management, forecasting, reorder quantity calculations, and procure-to-pay (P2P) processes.

    Problem: The Automated Storage and Retrieval System (ASRS) operates differently from traditional warehouse processes. In a traditional system, team members receive products, stock items, pick online orders, and manage replenishment manually. The ASRS, however, uses robots to manage these tasks automatically. After team members induct items into the system, robots handle stowing, picking, and replenishment operations before transferring products to team members for final packaging. Due to this automated process and unique storage method, we developed an automated replenishment system that triggers orders based on system data without human intervention.

    Solution: I defined requirements spanning over 15 systems and led the program across five engineering teams to design, implement, test, and launch the software in December 2024. The components of the requirements are summarized below:

    Eligibility Service

    To identify items for automation, we built a user interface powered by DynamoDB and APIs to verify eligibility. Users log into the web UI to designate items for automated purchasing in ordering systems.

    Ordering Inputs & ROQ Calculation

    We implemented a mechanism to upload unique MFC-specific Target Inventory Positions and vendor schedules, accounting for distinct ordering days, delivery days, and vendor lead times. For eligible items, we ran a separate, channel-specific Reorder Quantity (ROQ) calculation via backend Lambdas, using only online demand.

    Capacity Guardrails

    We built a direct integration with the 3P equipment to retrieve real-time capacity data, allocate capacity between Whole Foods and Fresh, and apply logic to truncate items after ROQ generation if they exceeded the machine's capacity.

    PO Identifiers for Suppliers, Inventory, and Accounting Purposes

    We created configurable identifiers that enabled MFC items to be placed on separate POs, included account-number identifiers that suppliers could ingest via EDI-850 messages to treat these products differently in their fulfillment systems (shipping them on separate pallets and trucks), and ensured purchase orders were mapped to the correct cost accounts for P&L and balance sheet reporting. These identifiers were propagated through all downstream systems for receipt processing and integrated into existing reporting.

     Receiving Workflows

    We introduced new receiving messages within the Store Receive Tool, a mobile application on a Honeywell CT60 scanner, to clearly indicate whether MFC products were part of a 3P or DC shipment, enabling team members to route them to back-of-house equipment instead of the sales floor.

    Perpetual Inventory

    Upon receipt, we established processes to track inventory events at key stages (receiving, induction into the equipment, sales, and spoilage) for real-time inventory tracking. Inventory was assigned to a distinct virtual bin to indicate whether it was designated for or already inside the equipment.
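    The event flow above can be sketched as a simple perpetual-inventory ledger. The bin names ("BOH", "ASRS") and method names here are illustrative only; the real microservices are far more involved:

```python
from collections import defaultdict

class PerpetualInventory:
    """Toy event ledger: tracks item quantities per virtual bin."""

    def __init__(self):
        self.bins = defaultdict(lambda: defaultdict(int))  # bin -> item -> qty

    def receive(self, item, qty):   # product arrives at back of house
        self.bins["BOH"][item] += qty

    def induct(self, item, qty):    # moved from back of house into the ASRS
        self.bins["BOH"][item] -= qty
        self.bins["ASRS"][item] += qty

    def sell(self, item, qty):      # picked by a robot for an online order
        self.bins["ASRS"][item] -= qty

    def spoil(self, item, qty, bin_="ASRS"):
        self.bins[bin_][item] -= qty

    def on_hand(self, item):
        return sum(b[item] for b in self.bins.values())

inv = PerpetualInventory()
inv.receive("sku-123", 24)
inv.induct("sku-123", 24)
inv.sell("sku-123", 5)
inv.spoil("sku-123", 1)
print(inv.on_hand("sku-123"))  # 18
```

    Keeping each event against a distinct virtual bin is what lets the ordering systems distinguish equipment inventory from sales-floor inventory when calculating reorder quantities.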

    Results: We successfully deployed the equipment and automated supply chain components, enabling a select group of customers to order Fresh and Whole Foods products in a single checkout, all fulfilled via robotic automation. The rollout continues in 2025 as we scale to over 21,000 items being stored and automatically ordered through the system. Once fully implemented, the ASRS equipment will support $10M in online revenue while reducing fulfillment costs by 40%, generating $3.5M in annual savings.

  • Technical Challenges

    The hardest technical part of launching the program was the capacity guardrails process. WFM buying systems did not have a concept of capacity incorporated into re-ordering calculations. For MFC, we defined requirements for controls in WFM and Amazon automated ordering systems to constrain ordering to what the automation hardware can physically hold, known as Capacity Guardrails. Capacity can be seen as three separate components: 1) a component that provides visibility into what space is currently available within the machine, 2) a component that splits the machine capacity between WFM, Fresh, and Core space, and 3) a way to prioritize the order in which each item and its quantity is applied to available capacity, truncating items that will not fit in the machine.

    Component 1

    The machine was organized in lanes of trays, and each lane had three different dimensions: the lane width, which came in 13 different sizes to fit all types of products; the temperature rating of the lane (ambient, chilled, or frozen); and the FDA classification, which comprised five separate categories of product type, such as raw meat or ready-to-eat. We named the combination of these three dimensions a “bucket type,” resulting in over 200 bucket types.

    Component 2

    Once we knew the bucket type combinations and the quantity of lanes available per bucket type, we had to allocate the total buckets across Whole Foods, Fresh, and Core product. We let the business define two inputs that tell our systems how to divvy this up. The first input was capacity %, which had to total 100% across the three business units so that no unit consumed another’s capacity, allowing for the proper assortment of product. The second, a utilization factor, was applied after the split to add a buffer in case the capacity input from the machine was a little off, reducing the risk of over-ordering. One key reason we had to approach it this way is that the ordering tools and backend services were different for Whole Foods and Fresh (in other words, different tech stacks). If we hadn’t had different tech stacks, we wouldn’t have had to split the capacity across business units, because we would be using one system. However, this was a technical limitation, so both systems had to know the capacity of the machine and use it in separate calculations and logic.
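    The split itself amounts to simple arithmetic per bucket type. A minimal sketch, assuming a 90% utilization factor and illustrative business-unit percentages (the real inputs were business-configured, not hardcoded):

```python
import math

def allocate_lanes(total_lanes, capacity_pct, utilization=0.9):
    """Split one bucket type's lanes across business units.

    capacity_pct must sum to 1.0; the utilization factor buffers
    against slightly stale capacity readings from the machine.
    """
    if abs(sum(capacity_pct.values()) - 1.0) > 1e-9:
        raise ValueError("capacity percentages must total 100%")
    return {
        unit: math.floor(total_lanes * pct * utilization)
        for unit, pct in capacity_pct.items()
    }

# e.g. 40 chilled lanes of one bucket type, split WFM / Fresh / Core
print(allocate_lanes(40, {"WFM": 0.5, "Fresh": 0.3, "Core": 0.2}))
```

    With 40 lanes split 50/30/20, WFM gets floor(40 × 0.5 × 0.9) = 18 lanes, Fresh 10, and Core 7, leaving a few lanes unallocated as a buffer against over-ordering.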

    Component 3

    The final component of capacity guardrails involved using Whole Foods’ lane allocation by bucket type to incrementally process items and their order quantities. Each item was assigned to a specific bucket type, with a maximum unit capacity per bucket. Additionally, we applied an item importance factor to prioritize higher-importance items before processing lower-priority ones.

    As items were processed and buckets filled, any items that exceeded capacity were truncated and logged in a DynamoDB database to track unmet demand. Items that fit within the system were processed into a purchase order and transmitted to the supplier via EDI-850. The supplier then fulfilled the order and loaded the product onto a separate truck, and receiving tools facilitated the three-way match between the purchase order, receipt, and e-invoice. Upon receipt, the product was added to a back-of-house virtual bin, ensuring accurate tracking of equipment-specific inventory. Product was then brought to the induct station to be loaded into the equipment.
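    The prioritized fill-and-truncate logic can be sketched as follows. Field names and the bucket-type label are hypothetical, and in production truncations were written to a DynamoDB unmet-demand table rather than returned in memory:

```python
def apply_capacity_guardrails(items, bucket_capacity):
    """Process recommended order quantities against bucket-type capacity.

    items: dicts with sku, bucket_type, roq (units), importance.
    bucket_capacity: remaining unit capacity per bucket type.
    Higher-importance items are filled first; overflow is truncated.
    """
    ordered, truncated = [], []
    remaining = dict(bucket_capacity)
    for item in sorted(items, key=lambda i: i["importance"], reverse=True):
        space = remaining.get(item["bucket_type"], 0)
        fill = min(item["roq"], space)
        cut = item["roq"] - fill
        remaining[item["bucket_type"]] = space - fill
        if fill:
            ordered.append({"sku": item["sku"], "qty": fill})
        if cut:
            truncated.append({"sku": item["sku"], "unmet_qty": cut})
    return ordered, truncated

# Hypothetical: two dairy items competing for 40 units of one bucket type
items = [
    {"sku": "milk",   "bucket_type": "chilled-W4-dairy", "roq": 30, "importance": 9},
    {"sku": "yogurt", "bucket_type": "chilled-W4-dairy", "roq": 20, "importance": 5},
]
ordered, truncated = apply_capacity_guardrails(items, {"chilled-W4-dairy": 40})
print(ordered)
print(truncated)
```

    Here milk (importance 9) is filled first with its full 30 units; yogurt gets the remaining 10 units, and its unmet 10 units are logged as truncated demand.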

    Timeline Challenges
    The Grocery Supply Chain (GSC) tech team was engaged late to support a 2024 launch. I wrote the Business Requirements Document (BRD) on April 19, just two weeks after being assigned the program. Our workback plan set a tech-ready date of October 25, allowing only five months for development and one month for testing and UAT, all while managing several other high-priority, in-flight features critical to the business. Given the number of systems involved and our existing headcount, I quickly recognized that we had two options: reduce scope and deprioritize certain requirements or request additional resources. Since resource allocation is often a lengthy process, I opted to proceed by cutting scope.

     

    At a high level, the project required us to automatically generate reorder quantities, process purchase orders with unique identifiers, ensure proper receipt of products into the correct virtual bin, enable system-wide awareness of the MFC concept for transactional processing, and ensure no disruption to retail items or front-of-house store processes. To meet the October 25 deadline, we descoped the following:

    • Capacity guardrails were postponed until ramp-up in January, as the gradual rollout allowed flexibility.

    • Automation for Produce items was deprioritized since these items comprised only 15% of total machine-serviced volume.

    • Low-volume edge cases (e.g., transformed items, DC pre-order items) were deprioritized, as they accounted for only 5% of total online retail items.

    These changes were presented to key business stakeholders through a tradeoff document, which included VPs from Retail Operations, DC Operations, In-Stock, and E-Commerce—totaling approximately 50 stakeholders across five VPs. The document evaluated all potential options, ultimately recommending the descoping of the outlined requirements while treating them as fast follows. It also detailed other work that would need to be put on hold until the MFC implementation was completed. The business aligned with our recommendations and agreed to prioritize MFC over other planned 2024 roadmap initiatives. As a result, we met our tech-ready date of October 25, delivered capacity guardrails by the end of January, and are now working through the backlog for 2025.

     

    Other challenges within this project included making all of our supply chain systems aware of whether an item was destined for the machine, managing input quality from catalog data authorities (such as item-vendor mapping), and working through all the edge cases that could occur. When processes are fully automated, inputs need to be accurate so the right item gets ordered from the right vendor in the right quantity, since there is no human intervention.

  • Background: Supply shortages, manufacturing delays, and logistics challenges frequently cause product stockouts at distributors and Whole Foods Distribution Centers (DCs). When products are unavailable, substituting similar items helps meet customer needs. For example, while customers prefer fresh seafood, they often accept frozen alternatives when fresh options are unavailable. To substitute, a team member needs to know an item is out of stock and be aware of an in-stock alternative that can service the customer's demand.

    Problem: The Store Ordering Tool (SOT) currently lacks visibility into out-of-stock items from both Whole Foods Market Distribution Centers (WFM DCs) and third-party vendors. When SOT creates a purchase order (PO) for WFM DC items, it routes through Direct Vendor Ordering (DVO) web for approval. While team members can modify orders based on DC communications about stockouts, DVO's system limitations prevent automatic notification of out-of-stock items or suggested substitutions. The current workaround of checking Order Link is inefficient and often leads to duplicate work.

     

    DC stockout visibility and substitution capabilities are essential for scaling Variable Weight ordering, particularly given our current sourcing patterns from DCs: 72% of Meat, 17% of Seafood, 13% of Cheese, 11% of Accoutrements, and 10% of Commodity Cheese. Without these features, the system generates inaccurate Suggested Order Quantities (SOQs) for items requiring substitution, creating a false sense of order coverage. In reality, these orders often fail fulfillment due to upstream DC stockouts.

     

    These system limitations trigger a cascade of operational challenges. The bullwhip effect disrupts the supply chain, while order writers face delays in proactively managing inventory positions to meet demand. The impact extends beyond operational inefficiencies: inaccurate SOQs increase the probability of stockouts, directly affecting sales revenue. Additionally, the system's limitations can lead to overordering as team members attempt to compensate for uncertainties, resulting in higher shrink rates. This is particularly problematic in perishable departments like meat and seafood, where products have short shelf lives and waste directly impacts profitability.

    Solution: Ryan authored a product requirements document detailing a structured approach to addressing this problem, breaking it down into seven phases:

    1. Ingest DC Out of Stock Data – Notify the order writer when a product is unavailable at the DC, allowing them to manually select a substitute.

    2. Recommend Substitute Products – Provide constrained substitution options when an item is unavailable at the DC, while still allowing the order writer to make the final selection.

    3. Optimize Reorder Quantity (ROQ) – Ensure accurate ROQ generation for substitute items.

    4. Large-Scale Supplier Integration – Develop a mechanism for large suppliers to upload inventory levels and integrate this data into SOT ordering processes.

    5. Local Supplier Integration – Establish a similar mechanism for smaller, local suppliers to upload inventory data and incorporate it into SOT ordering.

    6. Substitution Behavior Learning – Analyze and learn substitution patterns for DC items to improve recommendations.

    7. Advanced Substitution Optimization – Extend substitution logic to optimize for multiple variables, including in-stock rates, product cost, lead time, and selecting better-selling product varieties to maximize revenue.

    Execution: In 2024, Ryan and his engineering team delivered three critical features to enhance the Store Order Tool's functionality. First, they implemented persistent inventory tracking that connects directly to the DC Warehouse Management System. Second, they developed data transformation capabilities to ensure compatibility between warehouse data and Store Order Tool requirements. Third, they created a new workflow enabling team members to flag out-of-stock items and select appropriate manual substitutions.
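The flagging-and-substitution workflow delivered above can be illustrated with a small sketch. The data shapes and field names here are assumptions, not the production schema:

```python
# Minimal sketch of out-of-stock awareness: annotate each order line with
# DC availability and, when out of stock, list the substitution candidates
# an order writer could manually select.

def flag_out_of_stock(order_lines, dc_inventory, substitutes):
    result = []
    for line in order_lines:
        on_hand = dc_inventory.get(line["item"], 0)
        entry = dict(line, in_stock=on_hand >= line["qty"])
        if not entry["in_stock"]:
            entry["candidates"] = substitutes.get(line["item"], [])
        result.append(entry)
    return result

lines = flag_out_of_stock(
    [{"item": "fresh_salmon", "qty": 10}],
    dc_inventory={"fresh_salmon": 0, "frozen_salmon": 40},
    substitutes={"fresh_salmon": ["frozen_salmon"]},
)
# fresh_salmon is flagged out of stock with frozen_salmon as a candidate
```

In the delivered phase the final selection stayed with the order writer; later roadmap phases would constrain and learn these candidate lists.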

    Results: The solution launched in 2024 and scaled to the network in 2025. Feedback from team members was overwhelmingly positive, and these features are anticipated to improve in-stock by 25 bps, estimated to deliver a $5M entitlement by the end of 2025. The team has a backlog of items outlined in the requirements for future incremental delivery.

  • Background: Amazon's retail fulfillment for customer orders has traditionally relied on a combination of third-party carriers, Amazon-owned assets (trucks and trailers with contracted drivers), and Amazon-owned assets with Amazon-employed drivers for LTL (Less Than Truckload) shipments. For small package deliveries, Amazon coordinates with various carriers including USPS, UPS, DHL, Amazon Logistics, and others. Due to high volumes, Amazon enjoys some of the best rates in the industry for these shipment modes.

     

    However, for indirect procured assets or consumable expense items - such as lockers, furniture, signage, equipment, computer hardware, materials, Maintenance, Repair, and Operations (MRO) supplies, and other indirect products required to launch new buildings and sustain operations across the Amazon network - these goods are sourced through vendors who bill Amazon for the transportation costs. Annually, indirect transportation costs across Amazon's indirect supply base amount to $1.5 billion.

     

    By leveraging Amazon's negotiated rates for indirectly procured items and their associated transportation, costs could be reduced by 51% for small parcel shipments, 15% for Less Than Truckload (LTL) shipments, and 15% for Full Truck Load (FTL) shipments. The estimated cost savings (entitlement) would be $100M-300M over a one-year period.

    Problem: The program focused on reducing freight costs, but it also aimed to give Amazon visibility into freight shipments, enable controllership for freight payment, and improve operational readiness in a systematic, automated manner.

    Solution: Ryan initially supported a make-or-buy decision analysis to either develop an in-house system or purchase a commodity product. However, due to other high-visibility and more valuable transportation automation projects, this initiative did not secure a place in the roadmaps of other middle mile technology teams. Consequently, the team decided to proceed with evaluating third-party SaaS Transportation Management Providers.

     

    Ryan orchestrated a comprehensive RFP process across 11 SaaS providers, including Blue Yonder, Mercury Gate, Infor, Kuebix, and FreightPOP. These vendors were evaluated using a weighted scoring system that assessed user experience, product capability, technical specifications, and pricing. Following this evaluation, Blue Yonder emerged as the successful bidder and was awarded a $6M, three-year contract.

     

    The team sought a SaaS provider that could deliver several critical features: a supplier-facing user interface for self-service pickup and delivery management, seamless integration capabilities with Amazon's indirect procure-to-pay software (Coupa), UPS API integration functionality, EDI capabilities for handling tenders (204), tracking signals (214), and invoicing (210). Additionally, the solution needed to provide an internal user interface for managing rate cards and carrier profiles, along with shipment optimization functionality based on cost effectiveness. These requirements were essential to streamline and automate the transportation management process while ensuring cost optimization across the network.

    Execution: Ryan led the product requirements, program management, and technical delivery of the TMS integration. The integration team on the Amazon side was made up of five SDEs; the Blue Yonder side was a team of five including technical account managers, architects, and developers.

    The SFI product is made up of six main systems:

    Coupa – the indirect purchasing system in which POs are passed into the TMS

    Freight Order Manager (TMS) – the supplier facing user interface (UI) which the supplier uses to enter in shipment inputs such as weight, freight, class, origin/destination, etc. to get shipments assigned to carriers, print shipping documentation such as the bill of lading, packing slip, and label, and track shipments after pickup

    Transportation Manager (TMS) - the Amazon Facing UI responsible for rating, routing, planning, reporting, EDI transactions, and system configuration.  The brains of the TMS.

    Controllership and payment systems – consisting of the EDI console to ingest invoices, TIPS which ingests and audits invoicing data to ensure Amazon is paying per contracted prices, Payee central/OFA for release of payment to the carrier after approval.

    Accounting systems – Transportation Financial systems which will allocate transportation costs back to the owning cost center and assign transportation cost back to the asset allowing accounting the ability to capitalize transportation costs back into the value of the asset.

    The SFI integration layer – the layer between all of the systems detailed above. It is responsible for ingesting and passing PO data from Coupa to the TMS, updating shipping statuses in Coupa, providing manifest data in the required format to downstream systems, and handling consolidated reporting from all data sources.

    The team worked to build the integration, improve the supplier facing user interface to meet Amazon’s needs, and get suppliers and carriers onboarded to the program. Ryan coordinated six supplier shippers, five domestic 3P carriers (UPS, T-Force, Yellow Freight, Roadrunner, and YRC), multiple internal cross-functional teams, and Blue Yonder's engineering architects. He oversaw the entire shipment lifecycle, from creation to internal accounting automation, ensuring operational efficiency and compliance. Ryan played a pivotal role in the Coupa to TMS integration, designing systematic controls to regulate data flows, implementing purchase order (PO) trigger logic, and ensuring that only relevant transactions were processed. By optimizing transactional processes, he successfully reduced shipment creation time for suppliers by 44% for Small Parcel (SP) and 28% for Less-Than-Truckload (LTL), driving a 77% increase in throughput.
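The PO trigger logic mentioned above, ensuring only relevant transactions flow from Coupa into the TMS, might look something like the sketch below. The eligibility criteria shown (enrolled supplier, US ship-to, physical goods) are illustrative assumptions, not the actual rules:

```python
# Hedged sketch of PO trigger logic: decide which Coupa purchase orders
# should create a shipment in the TMS, filtering out irrelevant ones.

ENROLLED_SUPPLIERS = {"acme_lockers", "signage_co"}  # hypothetical names

def should_create_shipment(po):
    return (
        po["supplier"] in ENROLLED_SUPPLIERS
        and po["ship_to_country"] == "US"
        and not po["is_service"]          # service POs carry no freight
    )

pos = [
    {"id": 1, "supplier": "acme_lockers", "ship_to_country": "US", "is_service": False},
    {"id": 2, "supplier": "unknown_inc", "ship_to_country": "US", "is_service": False},
]
eligible = [po["id"] for po in pos if should_create_shipment(po)]
# → [1]
```

Gating the data flow this way keeps the TMS from processing transactions it can never fulfill, which was part of how throughput was improved.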

     

    To enhance user experience, Ryan introduced over 100 UI enhancements, including custom page configurations, advanced search filters, real-time alerts, bulk upload APIs, and reporting features. He also led the TMS to Coupa integration, enabling automated Advanced Shipment Notifications (ASN) to improve shipment tracking for the PO requestor. His strategic leadership and technical expertise ensured the successful deployment of the SFI system, driving cost savings, efficiency, and scalability across Amazon’s inbound transportation network.

    Results: Within the first two months of launch, Ryan's leadership over the execution of this VP-level goal resulted in $150K in savings, supporting deliveries to 200 destination sites across 29 different Amazon building types in 39 U.S. states, while achieving an impressive 99% delivery success rate. During this process, Ryan built out a backlog of additional features and prioritized them to scale the solution to more indirect suppliers.

  • Background: Whole Foods Market (WFM) is implementing a Fresh Food Production (FFP) strategy to centralize food manufacturing, allowing in-store prepared foods (PFDS) to transition into finishing kitchens that require less space, equipment, and labor. This strategy includes two facility types: Micro-Kitchens (MKs), which produce a limited PFDS selection for fewer than five stores, and Metro-Facilities (MFs), which produce the full PFDS selection for an entire region.

    Problem: To enable these facilities to produce food at scale, they needed a way to procure ingredient items directly from Whole Foods Market suppliers. However, the manufacturing facilities used Sage X3, a third-party Enterprise Resource Planning (ERP) system, to manage production, inventory, and supplier purchase orders, but there was no integration between the ERP and WFM systems. This gap created inefficiencies in procurement and supply chain operations.

    Solution: To solve this, Ryan led the product requirements and technical delivery of an ERP integration, enabling manufacturing facilities to place orders directly with WFM Distribution Centers (DCs). The solution involved several key components. First, an API Gateway was implemented to manage authentication and access. Next, a Coral Lambda Endpoint was developed to facilitate RESTful API functions using CRUD operations, allowing the ERP to create, read, update, and delete purchase orders. A Processor Lambda was then built to enrich data by incorporating catalog and vendor foundational details before writing purchase orders to two Oracle databases—one for third-party orders and another for the WFM DC system. Additionally, a Simple Queue Service (SQS) was introduced to ensure FIFO (First In, First Out) processing, provide redundancy, and support high-throughput order volumes.

    Execution: One of the major challenges in execution was foundational data inconsistencies. The ERP system relied on a weekly catalog file, meaning that any product code or department changes could cause API failures for certain purchase orders. To mitigate this, a Dead Letter Queue (DLQ) was implemented to log and redrive failed requests, and external tables were created to refresh ERP catalog inputs more frequently. A long-term solution was also proposed to externalize a service for real-time catalog updates.
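The Dead Letter Queue redrive described above can be illustrated with a small in-memory sketch (the real system used SQS; the data shapes and the catalog check are assumptions):

```python
# Illustrative DLQ redrive: once catalog inputs are refreshed, move
# recoverable purchase orders back to the main queue and keep the rest
# on the DLQ for investigation.

from collections import deque

def redrive(dlq, main_queue, can_process):
    still_failed = deque()
    while dlq:
        po = dlq.popleft()
        (main_queue if can_process(po) else still_failed).append(po)
    dlq.extend(still_failed)  # unrecoverable orders stay on the DLQ

dlq = deque([{"id": 1, "catalog_ok": True}, {"id": 2, "catalog_ok": False}])
main = deque()
redrive(dlq, main, can_process=lambda po: po["catalog_ok"])
# PO 1 is redriven to the main queue; PO 2 stays on the DLQ
```

With SQS FIFO queues, a redrive like this would also preserve message group ordering so high-throughput PO processing stays consistent.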

    Results: In 2024, Whole Foods Market successfully launched the first-ever Fresh Food Production proof of concept, and plans are in place to scale in 2025 to serve more Whole Foods locations. This integration is expected to reduce labor costs, improve operational efficiency, and increase operating profit by 5%, marking a significant step forward in modernizing WFM’s food production and supply chain capabilities.

Technical

General Questions & Mental Models

  • Agile delivery frameworks have been the foundation of my work in technology, particularly at Amazon, where structured processes guide product development. Amazon's approach begins with long-term planning (3YR Plan, OP1, OP2), followed by Vision/Business Case (PRFAQ), Business Requirements Documents (BRD), High-Level Design (HLD), Low-Level Design (LLD), and continues through Implementation, QA Testing, UAT, Pilot, Rollout, and Iteration. Our teams operate on two-week sprints, with product managers ensuring EPICs, stories, and requirements are clear before sprint planning. SDMs and SDEs break down technical tasks and estimate effort levels during Sprint Grooming Meetings.

     

    We use internal tools similar to JIRA, like Simple Issue Management, to assign points, track progress through a virtual Kanban, and prioritize tasks. Before each sprint, product and engineering teams meet to determine the next focus areas. Trade-off discussions are informed by customer priorities, timelines, ongoing work, delivery dates, and available resources. I typically approach these meetings with a clear perspective on what needs to be done and why. Once aligned on the "what," we finalize the "how" and assign tasks based on the team’s velocity and confidence in estimates.

     

    Weekly standups track progress, with flexibility to adjust goals if deliverables slip, scope changes, or customer priorities shift. If adjustments cannot be made within the sprint, they are revisited during the next sprint planning session. Grooming meetings occur as needed, sprint planning meetings are bi-weekly, and standups are held once or twice per week, depending on the SDM’s preference.

     

    Additionally, bi-weekly roadmap meetings with engineering and product teams, as well as separate meetings with business stakeholders, provide visibility into progress across all stages—from BRD to rollout. Customer feedback plays a critical role in prioritizing and iterating. Bugs are addressed immediately using on-call resources, while new features or enhancements are incorporated into sprint planning. Demos occur at the end of sprints or key milestones, allowing us to showcase progress, gather feedback, and iterate further. This iterative, feedback-driven approach ensures alignment with customer needs and business goals.

  • I use SQL regularly, at least once a week, to query Amazon’s data warehouse, known as Andes, via an internal web interface called Hubble, which connects directly to these databases. Additionally, I work with the Whole Foods data warehouse using tools like DBeaver or MySQL, which provide graphical user interfaces for database management. I would describe my SQL knowledge as intermediate, and I am proficient at writing queries once I understand the table schemas and the relationships between tables.

    One of the more complex queries I’ve written involved joining 11 different tables and spanned over 100 lines. This query was designed to identify all inventory transactions, along with item attributes from multiple tables, by leveraging item identifier information. I used it to validate transactions for a recent feature I launched. I have also set up jobs to run queries on a cadence and pipe that data into AWS Insights, giving me a daily feed of KPIs for my product and associated feature launches.

    I am skilled in performing tasks such as joining tables, using aggregate functions like COUNT, SUM, AVG, MAX, and MIN, grouping data with GROUP BY, filtering and sorting using WHERE, AND, OR, IN, BETWEEN, ORDER BY, and LIMIT, and creating temporary tables to simplify complex queries. SQL has been a critical tool for me in conducting data analysis, validating features, and driving data-informed decisions in product management.
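A small, self-contained example of the kind of join-and-aggregate query described above, run against an in-memory SQLite database. The table and column names are invented for illustration and are far simpler than the 11-table production schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (item_id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
CREATE TABLE transactions (item_id INTEGER, qty INTEGER, txn_type TEXT);
INSERT INTO items VALUES (1, 'salmon', 'seafood'), (2, 'cheddar', 'cheese');
INSERT INTO transactions VALUES (1, 10, 'receipt'), (1, -2, 'shrink'), (2, 5, 'receipt');
""")

# Join transactions to item attributes, then aggregate per department.
rows = conn.execute("""
SELECT i.dept, SUM(t.qty) AS net_units, COUNT(*) AS txn_count
FROM transactions t
JOIN items i ON i.item_id = t.item_id
GROUP BY i.dept
ORDER BY net_units DESC
""").fetchall()
# → [('seafood', 8, 2), ('cheese', 5, 1)]
```

The same JOIN / SUM / COUNT / GROUP BY / ORDER BY building blocks scale up to the transaction-validation queries described above.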

  • When I first joined the Whole Foods side of Amazon, our team inherited procurement systems with minimal documentation or insight into their functionality. Amazon had acquired Whole Foods approximately six years earlier and was in the process of transitioning from legacy WFM technology to Amazon's tech stack.

     

    To address this, we conducted a comprehensive analysis of the system architecture. Collaborating with my Senior Engineer, we used Design Inspector (a tool similar to Lucidchart or Draw.io) to map out the system. We began by examining the front end, identifying user interfaces and the various environments (prod, gamma, beta), along with their associated AWS accounts. Next, we delved into the codebase to identify key components, including services like AWS Lambda and DynamoDB, API calls, integration points, databases, and ETL pipelines used for reporting. This allowed us to construct both high-level and low-level architectural diagrams.

    We also identified dependent services not owned by our team but essential for the functionality of our algorithms and systems. These included catalog data, inventory data, purchase order data, and forecasting data.

     

    The outcome of this exercise was a holistic understanding of what our team produces and consumes, the integration points and ownership of external services, and a detailed flow of data, including the timing of jobs and API calls. This knowledge became invaluable for diagnosing on-call issues, planning future architectural updates, and supporting the broader Amazon Grocery initiative.

    In parallel, we track input metrics such as adoption rates, reason codes, and trouble tickets submitted for issues. These feed into output metrics like cost savings (labor efficiency or shrink reduction) and revenue enablement (improvements in in-stock levels).

    For example, when we launched the DC Out-of-Stock and Substitution features, which made the Store Ordering Tool out-of-stock aware and enabled a replacement workflow, we collected both qualitative and quantitative data.

    • Qualitative Feedback: Responses were mixed. Some stores praised the new workflow, providing strong testimonials, while others flagged accuracy issues with out-of-stock data due to catalog input discrepancies at specific DCs and stores.

    • Input Metrics: Adoption was ~95% in areas without catalog issues, while in areas with data discrepancies, adoption dropped to ~50%.

    Both datasets were instrumental in identifying and root-causing the catalog issues, enabling us to correct the inputs and improve accuracy.

    While I view both qualitative and quantitative data as essential, I prioritize building robust qualitative feedback mechanisms before a full launch to identify potential issues early. I also favor running pilots with a limited set of stores for a sufficient period to minimize risk, reduce the blast radius, and enable incremental delivery.

  • As a PMT, I dedicate approximately 40% of my time to crafting business and technical artifacts. These include:

    1. 3-Year Plans (3YR): These documents outline the current state, identify gaps, and provide high-level recommendations needed to achieve a desired future state.

    2. Press Release Frequently Asked Questions (PRFAQs): This document presents a vision for the product, written as if the company were announcing it publicly. It outlines the problem, the target market, the business value, and why it’s a strategic investment for the company. The PRFAQ serves as a guiding document to align stakeholders on the "what" and the “why” behind the desired future state for a set of high-level features.

    3. Operational Planning (OP1/OP2): This annual cycle focuses on defining the vision for the upcoming year and prioritizing key initiatives ("big rocks"). The process involves technical scoping and T-shirt sizing to estimate effort based on assumptions. The output is a detailed document reviewed by business and tech leaders to allocate resources. This document establishes what will and will not be built in the year and the rationale behind these decisions.

    4. Business Requirement Documents (BRDs): Individual features are broken down into BRDs, which include the purpose, background, goals, entitlement assumptions, current state, future state, EPICs, user stories, risks, dependencies, next steps, and FAQs. These are reviewed by both business and tech stakeholders before moving forward.

    The development lifecycle proceeds as follows:

    • Design: Engineers own the High-Level Design (HLD) and Low-Level Design (LLD) documents, but these are reviewed with product to ensure alignment with requirements.

    • Requirements Traceability Matrix (RTM): Before QA testing, PMTs collaborate with QA Engineers to create an RTM that links requirements to SIM tickets, code changes, and validation criteria.

    • QA Testing: Testing is conducted in lower environments using the RTM to validate functionality against requirements.

    • User Acceptance Testing (UAT): Post-QA, the product team conducts UAT, ensuring all bugs are addressed and functionality aligns with the original requirements and RTM.

    • User Feedback: A demo is presented to users for feedback. Users are invited to test in gamma environments or sandboxes, and their input informs adjustments.

    • Pilot Phase: Features are piloted with a broader audience to gather additional feedback, address gaps, and incorporate scope changes into future sprints.

    • Incremental Rollout: After the pilot phase, the feature is gradually rolled out to a wider audience to ensure stability and adoption.

    Throughout this process, clear communication with stakeholders is maintained. If there are gaps, trade-offs, or timeline adjustments, we provide detailed recommendations, ensuring alignment with stakeholder expectations. This rigorous approach ensures the delivered product meets requirements and addresses customer needs effectively.

  • Assessment & Strategy
    The first phase focuses on understanding the current ERP system, defining the desired future state, and building a strategic roadmap. This starts by inventorying existing ERP capabilities—identifying all core business processes, modules, dependencies, data flows, and integrations. From there, it’s important to segment the monolith by business domain, using bounded contexts like Finance, HR, Procurement, and Inventory to guide the decomposition. A thorough assessment of technical debt and risks should follow, highlighting brittle workflows, redundant systems, or high-effort areas that pose challenges. With this foundation, teams can define a north star architecture—typically a vision centered on cloud-native microservices aligned to the business domains. Finally, components should be prioritized based on a combination of business value, technical feasibility, ROI, change readiness, and criticality to the organization.

    Enable the Foundation
    This phase is about laying the groundwork for modernization while keeping the legacy ERP system operational. It begins with introducing an API gateway and service mesh to support routing, observability, and interoperability across systems. Setting up shared services is also key—this includes identity and access management (IAM), a messaging/event bus such as Kafka or SNS/SQS, logging and monitoring tools, and scalable cloud infrastructure. A well-defined data replication strategy is essential at this stage. Depending on complexity, teams may implement Change Data Capture (CDC), dual writes, or event sourcing to ensure consistency between legacy and new systems.

    Extract and Modernize
    Here, the organization begins incrementally replacing legacy functionality with microservices, starting with lower-risk, non-critical domains such as reporting, analytics, or document generation. Once those are stabilized, the focus shifts to operational modules like Inventory, Procurement, and Order Management. High-risk modules—such as Financials or HRIS—are typically delayed unless there’s an opportunity to align with an external vendor transition. The modernization of each domain follows a repeatable high-level process: extract the relevant data and models, rebuild the business logic as a microservice, replace or integrate with the existing UI layer if needed, redirect ERP integrations to the new service, and finally, shut off the old module once the transition is successful.

    ERP Decommissioning
    The final phase is about fully retiring legacy ERP components after a successful migration and stabilization period. This involves sunsetting modules one by one, ensuring full feature and data parity with the newly built systems. Final data archival is conducted to meet regulatory and audit requirements, retaining read-only access as needed. To wrap up, documentation, training materials, and support processes must be updated to reflect the new system landscape, ensuring business continuity and user readiness.

    Data Strategy Throughout

    • Data federation and synchronization must be tightly controlled.

    • Event-driven sync between old and new systems

    • Shadow reads/writes for validation

    • Master data management (MDM) to ensure consistency across services
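The shadow-read validation mentioned above can be sketched as follows. This is a minimal illustration under assumed shapes: read from both the legacy ERP module and the new microservice, serve the legacy answer, and record any mismatch for investigation:

```python
# Minimal shadow-read sketch: the legacy system remains the source of
# truth during migration while the new service is validated in parallel.

def shadow_read(key, legacy_read, new_read, mismatches):
    legacy_value = legacy_read(key)
    new_value = new_read(key)
    if new_value != legacy_value:
        mismatches.append((key, legacy_value, new_value))  # for triage
    return legacy_value

mismatches = []
value = shadow_read(
    "PO-123",
    legacy_read=lambda k: {"status": "open"},
    new_read=lambda k: {"status": "closed"},
    mismatches=mismatches,
)
# legacy value is returned; one mismatch is recorded for investigation
```

Once the mismatch rate drops to near zero for a sustained period, traffic can be cut over to the new service with confidence.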

    Security, Compliance, and Governance

    • Ensure each new service complies with SOX, GDPR, or industry standards.

    • Implement access controls, audit trails, and centralized policy management from day one.

Scenarios & Examples

  • My scope includes user-facing web dashboards that allow the In-Stock team to configure Target Inventory Position, manage supplier schedules, override forecasting outputs, and handle exceptions used in store ordering algorithms. In addition to web tools, I manage multiple mobile Android applications that enable store teams to cycle count inventory, receive products, record and financially evaluate shrink, and determine the right Reorder Quantities (ROQ) in a Just-in-Time (JIT) fashion. These applications support purchasing for both first-party (1P) and third-party (3P) suppliers via EDI integration, ensuring that the right products are delivered to the right stores, in the right quantity, at the right time.

    One of the core products I own is a mobile Android application known as the Store Ordering Tool (SOT), which uses inventory, forecast, target inventory position, and supplier schedule data to provide a recommendation a team member can either accept or adjust to keep product in stock. Some inputs are business-provided via web dashboards. Others, such as the forecast, are system-generated via machine learning forecasting models. The rest are algorithms that compute how much product to order. All of these backend services use APIs to communicate with each other and ultimately produce a Recommended Order Quantity.
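As a simplified, hypothetical illustration of how those inputs could combine into a recommendation (the production algorithm is considerably more involved):

```python
# Hedged sketch: order enough to cover forecast demand and restore the
# target inventory position, net of stock on hand and already inbound.

def reorder_quantity(on_hand, on_order, forecast_units, target_position):
    need = forecast_units + target_position - on_hand - on_order
    return max(0, need)  # never recommend a negative order

roq = reorder_quantity(on_hand=12, on_order=6, forecast_units=20, target_position=10)
# → 12
```

In SOT, each of these inputs arrives from a different backend service over an API, and the team member reviews the resulting quantity before submitting.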

    Example APIs:

    • GetSubteamAllowListByStoreAndFeatureId – Read (Get): Returns an allow list (whitelist) for departments that should show within the tool.

    • SetInventory – Update (Put): Updates the current inventory balance so a re-order quantity can be recalculated if system and physical inventory differ.

    • SnoozeItem – Create (Post): Writes item detail to a database to prevent it from surfacing in future ordering days.

    • GetVendorDetails – Read (Get): Retrieves vendor details for a given item.

    • GetProductInfo – Read (Get): Fetches item identifier, description, and other catalog details.

    • SOIRemoveItem – Delete (Delete): Removes a reorder quantity (ROQ) added to the cart that no longer needs to be ordered.

    • SOICancelOrder – Delete (Delete): Removes all items from the cart and leaves them on a list for future action.

    • SOISubmitOrder – Create (Post): Writes the order to a database.

    • GetDetailsForOrdering – Read (Get): Retrieves multiple data points for Re-Order Quantity, Purchase Order History, Sales Order History, Substitution History, and more.

    Over time, we have built out new APIs on these services to handle specific use cases. For example, one use case required retrieving a vendor schedule input at a lower granularity, so we stood up new APIs to serve a vendor schedule at a store-vendor-item level instead of just a store-vendor level.
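    The CRUD semantics above lend themselves to a simple route table. The sketch below pairs each example operation with a plausible HTTP verb and resource path; the verbs follow the descriptions in the list, but the paths are illustrative assumptions, not the actual service contracts.

```python
# Illustrative route table for the example Store Ordering Tool (SOT)
# operations. The HTTP verbs follow the CRUD semantics described above;
# the resource paths are hypothetical, not the real service contracts.
SOT_ROUTES = {
    "GetSubteamAllowListByStoreAndFeatureId":
        ("GET", "/stores/{storeId}/features/{featureId}/subteam-allowlist"),
    "SetInventory":    ("PUT", "/stores/{storeId}/items/{itemId}/inventory"),
    "SnoozeItem":      ("POST", "/stores/{storeId}/items/{itemId}/snooze"),
    "GetVendorDetails": ("GET", "/items/{itemId}/vendor"),
    "GetProductInfo":   ("GET", "/items/{itemId}/product"),
    "SOIRemoveItem":   ("DELETE", "/carts/{cartId}/items/{itemId}"),
    "SOICancelOrder":  ("DELETE", "/carts/{cartId}"),
    "SOISubmitOrder":  ("POST", "/orders"),
    "GetDetailsForOrdering":
        ("GET", "/stores/{storeId}/items/{itemId}/ordering-details"),
}

def method_for(operation: str) -> str:
    """Return the HTTP verb backing a given operation name."""
    return SOT_ROUTES[operation][0]
```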

  • I start by gathering business and customer requirements, ensuring alignment with our digital process strategy. I work closely with stakeholders to define API use cases, security needs, and data structures. I prioritize designing RESTful APIs with clear versioning, standard authentication, and comprehensive documentation to ensure ease of integration. At Amazon, we typically follow standards covering error handling and rate limiting, and we conduct unit, integration, and load testing. Rollouts are usually gradual, using feature flags, A/B testing, and phased releases. For API monitoring, we use Amazon CloudWatch, setting up dashboards, monitors, and alarms to catch issues during rollout. When transitioning to a new API, we typically support a versioning strategy that gives customers a 3-6 month window to onboard.

     

    By following this structured approach, I ensure APIs remain secure, scalable, and aligned with business goals.
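    As a concrete illustration of the phased rollout described above, a percentage-based feature flag can deterministically bucket each caller so that raising the rollout percentage only ever adds stores, never flip-flops them. The flag name and hashing scheme below are assumptions for the sketch, not a specific Amazon implementation.

```python
import hashlib

def is_enabled(flag: str, store_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a store into [0, 100) and gate on rollout_pct.

    Hashing flag and store together keeps buckets stable per flag, so a
    store enabled at 10% stays enabled as the rollout widens to 50% and 100%.
    """
    digest = hashlib.sha256(f"{flag}:{store_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

assert is_enabled("sot-v2-ordering", "store-123", 100)    # full rollout
assert not is_enabled("sot-v2-ordering", "store-123", 0)  # dark launch
```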

     

    Fresh Food Production (FFP) Sage X3 to Orderlink Integration

    Whole Foods Market (WFM) is implementing a Fresh Food Production (FFP) strategy to centralize food manufacturing, allowing in-store prepared foods (PFDS) to transition into finishing kitchens that require less space, equipment, and labor. This strategy includes two facility types: Micro-Kitchens (MKs), which produce a limited PFDS selection for fewer than five stores, and Metro-Facilities (MFs), which produce the full PFDS selection for an entire region.

     

    To enable these facilities to produce food at scale, they needed a way to procure ingredient items directly from Whole Foods Market suppliers. However, the manufacturing facilities used Sage X3, a third-party Enterprise Resource Planning (ERP) system, to manage production, inventory, and supplier purchase orders, but there was no integration between the ERP and WFM systems. This gap created inefficiencies in procurement and supply chain operations.

     

    To solve this, I led the product requirements and technical delivery of an ERP integration, enabling manufacturing facilities to place orders directly with WFM Distribution Centers (DCs). The solution involved several key components. First, an API Gateway was implemented to manage authentication and access. Next, a Coral Lambda Endpoint was developed to facilitate RESTful API functions using CRUD operations, allowing the ERP to create, read, update, and delete purchase orders. A Processor Lambda was then built to enrich data by incorporating catalog and vendor foundational details before writing purchase orders to two Oracle databases—one for third-party orders and another for the WFM DC system. Additionally, a Simple Queue Service (SQS) was introduced to ensure FIFO (First In, First Out) processing, provide redundancy, and support high-throughput order volumes.
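    The SQS piece of that design can be illustrated in plain Python. The sketch below mimics the two FIFO-queue guarantees the integration relied on, strict ordering within a message group and content-based deduplication; the group IDs and payload shape are illustrative, not the real Sage X3 integration's schema.

```python
from collections import defaultdict, deque

# Pure-Python sketch of the guarantees an SQS FIFO queue provides: strict
# arrival-order delivery within a message group, plus deduplication by ID.
# Group IDs and payload shapes here are illustrative only.
class FifoQueueSketch:
    def __init__(self):
        self._groups = defaultdict(deque)
        self._seen = set()  # content-based deduplication IDs

    def send(self, group_id, dedup_id, body):
        """Enqueue unless this dedup ID was already seen (SQS-style dedup)."""
        if dedup_id in self._seen:
            return False  # duplicate delivery is silently dropped
        self._seen.add(dedup_id)
        self._groups[group_id].append(body)
        return True

    def receive(self, group_id):
        """Messages within one group come out strictly in arrival order."""
        q = self._groups[group_id]
        return q.popleft() if q else None

q = FifoQueueSketch()
q.send("facility-MK1", "po-1001", {"po": 1001, "action": "create"})
q.send("facility-MK1", "po-1001", {"po": 1001, "action": "create"})  # deduped
q.send("facility-MK1", "po-1002", {"po": 1002, "action": "create"})
assert q.receive("facility-MK1")["po"] == 1001  # first in, first out
```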

  • In my current role, I owned a portion of the strategic transition of a legacy ERP system to a modern cloud-based microservices architecture. We began by conducting a thorough assessment of the existing monolith, mapping all business processes, data flows, and technical dependencies. From there, we segmented the ERP into bounded business domains — such as finance, procurement, and inventory — and prioritized them based on business value and technical feasibility. We adopted a phased ‘strangler’ approach, starting with less critical modules to build confidence and refine our migration pattern. As we modernized each domain, we built APIs, introduced event-driven data synchronization, and established shared services like authentication and logging to support the new services. Throughout the process, we maintained interoperability with the legacy system to ensure business continuity. Once services were validated and stable, we incrementally retired legacy modules. The result was a more agile, scalable architecture that significantly improved deployment velocity, data access, and system resilience.

    Challenges:

     

    Dual Read/Writes: To transition from the legacy application to the new application, we had to temporarily support dual reads/writes to keep data in sync. To accomplish this, we stood up a new database table that captured receive transactions from the old tool until the new tool was fully adopted. Once the new application was rolled out for all receive transactions, we deprecated the old tables and application.
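    A minimal sketch of that dual-write bridge, assuming a hypothetical receive-transaction record: while the migration flag is on, every write lands in both stores, so the two systems can be compared and cut over safely.

```python
# Sketch of a dual-write bridge during a legacy-to-new migration. While the
# migration flag is on, every receive transaction is written to both the
# legacy store and the new table so the two stay in sync. Store names and
# record shape are hypothetical, not the actual schema.
legacy_store = []
new_store = []

def record_receive(txn, dual_write_enabled=True):
    """Write to the legacy system, mirroring into the new table during the
    migration window. Once the new tool owns all traffic, the legacy write
    is removed and the old table deprecated."""
    legacy_store.append(txn)
    if dual_write_enabled:
        new_store.append(txn)

record_receive({"po": 42, "item": "A1", "qty": 10})
assert legacy_store == new_store  # both systems see the same transaction
```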

     

    Exceptions: Defining and handling exceptions was net new in the P1s. We had to solve for situations where product was on the PO but not physically shipped, where the item quantity was short due to damage or out-of-stocks, and where more product was shipped than anticipated. To capture these, we built features that let team members manage each of these exceptions.

     

    Legacy data: Some data existed in the current state but was outdated and not being used in receive processes. For example, the legacy systems treated the PO as the source of truth when we were receiving 1P DC-related orders. To know what the DC actually shipped, however, we needed to stand up a new Advanced Shipment Notification (ASN) process. The ASN would true up the incoming quantity prior to receipt, so we could receive the right item and quantity into inventory.

     

    Results: This innovation cut product receiving time from two hours to one, saving an hour per full truckload. Additionally, I supported the launch of features allowing team members to issue credit POs for missing, short, or damaged products, cutting execution time from 20 to 10 minutes per transaction.

Supply Chain & Procurement

  • When I worked in Procurement at Danaher, my team conducted quarterly supplier reviews, evaluating key performance areas such as quality, delivery, cost, payment terms, and innovation. Each metric was weighted, resulting in an overall supplier score. The most important vendors for this process were those with the largest volume, delivery counts, and spend, and those providing critical components for the manufacturing process.

    This scorecard system allowed us to:

    • Compare suppliers within the same category to benchmark performance.

    • Hold suppliers accountable for meeting our business needs.

    • Encourage continuous improvement by incentivizing better performance.

    Since the materials sourced were used in medical device production, specifically orthodontics, each batch had to meet strict quality standards and undergo tolerance inspections.

    • On-time delivery was measured by comparing expected vs. actual delivery dates.

    • Cost performance was tracked using purchase price variance (PPV), calculated as (cost delta) × (quantity received) over a given period.

    • Payment terms were set at a target of Net 90, allowing Danaher to produce and sell products before paying for raw materials, improving working capital turns. While this was our contractual standard, some smaller suppliers couldn't accommodate Net 90. In these cases, we worked on incrementally extending payment terms to align with our financial strategy.

    This structured approach helped optimize supplier relationships, drive cost savings, and ensure high-quality production standards in a regulated industry.
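    The scorecard and PPV mechanics above can be written down directly. The weights and scores below are illustrative, not Danaher's actual values; the PPV formula matches the (cost delta) × (quantity received) definition given earlier.

```python
# Illustrative weighted supplier scorecard plus the PPV formula described
# above. Weights and scores are made-up examples, not actual Danaher data.
def supplier_score(scores, weights):
    """Weighted average of per-metric scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[m] * w for m, w in weights.items())

def ppv(baseline_cost, actual_cost, qty_received):
    """Purchase price variance: (cost delta) x (quantity received).
    Negative means favorable (we paid less than the baseline)."""
    return (actual_cost - baseline_cost) * qty_received

weights = {"quality": 0.30, "delivery": 0.25, "cost": 0.25,
           "terms": 0.10, "innovation": 0.10}
scores = {"quality": 95, "delivery": 80, "cost": 70,
          "terms": 100, "innovation": 60}
assert round(supplier_score(scores, weights), 1) == 82.0
assert ppv(1.25, 1.00, 10_000) == -2500.0  # favorable $2,500 variance
```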

    Specifically, we tried to move the needle with suppliers to improve each individual metric.  As an example:

    • Addressing Low On-Time Delivery Scores
      To tackle low on-time delivery scores, we partnered closely with suppliers to uncover underlying issues and implement practical solutions. We discovered that many suppliers struggled with demand variability and lacked clear insight into our forecasting and purchase order rhythms. To bridge this gap, we began sharing detailed forecasts on a regular cadence, even down to the raw material level, providing transparency into our expected needs. Additionally, we implemented blanket purchase order agreements that empowered suppliers to better plan their production cycles and manage inventory. This proactive approach not only smoothed fulfillment processes but also helped reduce variability in lead times and improve overall delivery performance.

    • Improving Supplier Quality Scores
      When supplier quality scores lagged, we took a hands-on approach to understand whether the root issues were due to manufacturing flaws, inconsistent materials, or process inefficiencies. For suppliers who were strategic and possessed highly specialized capabilities, we occasionally invested in critical equipment or production assets to upgrade their manufacturing capabilities. These assets remained under our ownership, preserving optionality to insource if needed, while resolving quality concerns. In instances where quality procedures were lacking, we standardized processes that suppliers were required to adopt and conducted facility audits to ensure compliance. Additionally, we revisited outdated product specifications and, where appropriate, relaxed tolerances without affecting performance. This reduced unnecessary product rejections and improved adherence to quality standards.

    • Reducing Costs & Driving Supplier Competition
      To address low cost performance, we collaborated with suppliers to uncover savings opportunities across raw materials, direct materials, and overhead expenses. By qualifying multiple vendors for key materials and services, we introduced healthy competition into the sourcing process. A particularly successful tactic was the use of online reverse auctions through our source-to-contract platform. This dynamic, auction-style approach allowed suppliers to submit real-time, lower bids to secure contracts, fostering a competitive atmosphere that drove down prices. We also explored alternative materials—especially in packaging—identifying cost-effective options that maintained product protection and customer satisfaction. These efforts significantly reduced costs while maintaining quality and strengthening our overall supplier network.

  • Situation: One of our packaging suppliers, who provided die-cut Poron for our quad packaging, notified us of a price increase.


    Task: As the owner of that supplier relationship, it was my responsibility to mitigate the cost impact and ideally negotiate a more favorable outcome.


    Action: I first met with the supplier to understand the root cause of the increase. He explained that the cost of his raw material had gone up due to pricing changes from one of his upstream vendors, and that it was out of his control. To validate this and explore alternatives, I reached out to other approved vendors in our supply base for competitive quotes. Their prices came in significantly lower. I then worked with our engineering and quality teams to qualify the alternative material—a process that took about two months, during which we temporarily absorbed the higher cost. Once qualified, I returned to our supplier with a data-driven proposal. I shared the lower quotes I’d received—roughly 10% below his current pricing—and highlighted the long-term value of our business, which had grown to over $500K annually. I explained that to continue our partnership, he would need to meet the lower pricing benchmark.
    Result: He agreed, contingent on us issuing a blanket purchase order, which would allow him to secure better raw material pricing from his vendor. We agreed to this approach, and as a result, achieved a 10% cost reduction—equating to approximately $50K in annual savings—while maintaining a strong supplier relationship.

  • At Danaher, I worked on a product called the Facemask—an extraoral appliance used in orthodontics, commonly categorized as headgear. It’s not a glamorous product, but it was an area with cost-saving potential.

    I suspected we were significantly overpaying, given we were sourcing the product at $36 per unit. To validate this, I initiated a should-cost analysis by disassembling the product to understand its components and material composition.

    • The wire was stainless steel with a nickel coating

    • The stops were machined and also nickel-coated

    • The chin and forehead pieces were molded from polypropylene

    • The foam pads were molded and coated with tricot fabric

    After analyzing the materials, I factored in labor—done in Mexicali at roughly $2/hour—and applied a conservative overhead estimate of 5%. My full breakdown brought the internal cost estimate to about $16 per unit. Adding a 25% supplier margin, the fair price should’ve been closer to $20 per unit.

    Using this should-cost model, I was able to benchmark against other suppliers and ultimately sourced a new vendor willing to meet the $20 target. This led to a cost reduction of $16 per unit, translating to approximately $240K in annual savings.
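    The arithmetic behind that model is worth making explicit. The $16 internal cost, 25% supplier margin, $36 current price, and $240K annual savings are the figures above; the implied annual volume is derived from them, not stated in the original analysis.

```python
# Worked version of the should-cost math above. All dollar figures come
# from the text; the implied unit volume is derived, not stated.
internal_cost = 16.00               # materials + labor + 5% overhead
supplier_margin = 0.25
fair_price = internal_cost * (1 + supplier_margin)
assert fair_price == 20.00          # the $20/unit target

current_price = 36.00
savings_per_unit = current_price - fair_price
assert savings_per_unit == 16.00

annual_savings = 240_000
implied_annual_volume = annual_savings / savings_per_unit
assert implied_annual_volume == 15_000  # units/year implied by the figures
```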

  • One example that stands out is a project I led to re-source the natural latex used in our elastic product line at Ormco. The goal was to generate cost savings by shifting from a domestic supplier to one based in a low-cost region. While the potential savings were attractive, the project faced several challenges along the way.

    After nearly two years of effort, including supplier qualification and testing, the new supplier failed validation for a second time. The feedback from our Operations team was that, while the material met the raw material spec, it behaved differently in processing—specifically in its "chop-ability." Our R&D team also flagged that the material didn't meet critical performance criteria.

    At that point, I had to reassess our strategy. Despite the sunk effort, I made the decision to cut the project. Continuing to invest resources into a supplier that couldn’t meet spec after two validation attempts didn’t make business sense. It was a difficult call, especially given the time and collaboration invested by both teams, but it was the right move.

    To offset the anticipated savings we had counted on from the project, I pivoted and re-engaged our existing supplier. I was able to renegotiate terms that ultimately delivered the savings we needed—without compromising product quality or operational stability.

  • At Danaher, I noticed an ongoing issue in manufacturing where we were consuming more material than necessary—specifically spring pins. The ratio of purchased parts to assemblies should have been 1:1, but the BOM included a 5% overage, and I suspected unnecessary waste.

    My task was to understand the root cause and implement a solution to prevent this from continuing.

    I organized a Kaizen event with cross-functional support from R&D, Production, and Engineering. Through value stream mapping, we discovered that the spring pins—small and easy to drop—were difficult for operators to manage. This explained the built-in buffer.

    To address this creatively, we took several actions:

    • R&D developed a containment fixture to help manage the spring pins during assembly.

    • We established standard work to return unused pins back to stock after work orders were completed.

    • We trained operators on the cost impact and importance of material control.

    Additionally, we discovered the supplier was sending about 2% extra pins per shipment. While this was unintentional, we wanted to take advantage of the overage without misrepresenting inventory. We implemented a new receiving process to count and log excess parts before they hit stock.

    Result: These efforts eliminated around 3% of material waste. Given that spring pins cost $0.11 each and annual scrap was estimated at 330,000 pins, the result was a $36,300 scrap reduction. We’ve also instituted monthly reviews to ensure process adherence and continuous improvement.
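    As a quick sanity check, the savings figure ties out from the numbers above:

```python
# Tie-out of the scrap-reduction figure using the text's own numbers.
pin_cost = 0.11                 # dollars per spring pin
annual_scrap_pins = 330_000     # estimated annual scrap eliminated
scrap_reduction = annual_scrap_pins * pin_cost
assert round(scrap_reduction, 2) == 36_300.00  # matches the $36,300 claim
```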

  • While working at Danaher, we utilized various tools to map the supply chain, particularly value stream mapping. This method provided a comprehensive view of the current state, including process steps, material flow, information flow, and associated metrics. We mapped this out physically in Kaizen meetings and then used Microsoft Visio to capture it virtually. It was instrumental in identifying waste across people, processes, and technology, as well as analyzing operational throughput. For example, if customer demand required 400 units shipped per day but we could only ship 200, the mapping would highlight bottlenecks in the process.

     

    One notable instance involved an assembly operation in Mexicali. Components were metal injection molded in California and sent to Mexicali for assembly and packaging. The bottleneck in our highest-demand item was a manual step: welding tiny brackets. Despite applying standard work to improve throughput, even our best-trained employees couldn’t consistently meet quotas. This revealed the need for a more significant change.

     

    We made a business case for investing in laser welding equipment—a technology not new to the industry but new to our operation. After piloting, conducting verification and validation, and testing at normal production throughput, the laser welder proved five times faster than manual welding. This improvement enabled us to meet customer demand and was subsequently rolled out across all assembly stations and product lines.

     

    From a financial perspective, the investment was a clear success, with the equipment achieving an ROI in under two years. The process exemplified how value stream mapping and targeted investments in technology can drive operational efficiency and meet customer needs effectively.

  • I have extensive experience with procurement and ERP systems, including Coupa, Oracle R12, and various Amazon-developed tools that support procure-to-pay (P2P), source-to-contract (S2C), supplier relationship management (SRM), and spend analysis.

     

    At Amazon, we utilize Coupa for indirect purchasing, which provides a catalog of available items and integrates with suppliers like Amazon Business, Grainger, Fastenal, McKesson, and CDW, among others. Coupa is helpful because it:

    • Automates the procure-to-pay (P2P) process, reducing manual work.

    • Provides real-time spend tracking and analytics to optimize budgets.

    • Centralizes supplier onboarding and relationship management (SRM).

    • Connects with ERP systems (SAP, Oracle, Workday, NetSuite, etc.) for seamless data exchange.

    • Automates audit trails to ensure regulatory compliance.

    When managing Indirect categories in my Category Management role, I always began by analyzing the spend landscape. I would export and analyze data, applying the 80/20 rule to identify key suppliers, high-spend items, and recurring purchases. This approach allowed me to focus negotiations where they would have the most impact.

    One example was identifying a sole-sourced vendor responsible for manufacturing the Amazon door desk, a staple in every major Amazon fulfillment center. To drive cost savings and mitigate risk, I took the CAD drawings and initiated a competitive bidding process with multiple manufacturers. This resulted in a 15% cost reduction, leading to $566K in savings, while also reducing supplier risk by establishing a dual-source strategy.

     

    For direct purchasing at Whole Foods, we primarily rely on homegrown technology. Instead of a traditional ERP, we use a microservices architecture in which each component serves a distinct function but integrates seamlessly. These services include:

    • Perpetual inventory tracking

    • Reorder quantity recommendations

    • Vendor master data management

    • Purchase order creation and transmission

    • Three-way match reconciliation

    • E-invoicing

    • General ledger accounting for cost allocation

     

    In terms of process improvements, we focus on automation and optimization to reduce manual effort in determining what to order, from which vendor, and at what time for recurring transactions. Our solutions incorporate:

    • Complex forecasting models to predict demand

    • Deterministic target inventory positioning

    • Vendor selection algorithms to ensure optimal procurement decisions

    This approach ensures product availability while preventing overbuying and reducing shrink, ultimately improving operational efficiency and cost management.

     

Personal

  • I’m most proud of my wife, Anna. She’s one of the hardest-working, kindest, most compassionate and resilient medical intensive care unit (MICU) nurses you’ll ever meet. I’ve learned so much from her—about strength, patience, and the power of perseverance. Her ambition to grow in her career never ceases to amaze me.

    On days when I feel like I’ve been through the wringer (which can be frequent), I remind myself that Anna probably had a day ten times more intense. I’m no stranger to stress, but what she faces daily puts things in perspective. She was recently nominated for a DAISY Award, and I couldn’t be more proud. She is, without question, the best thing in my life—and I’m incredibly grateful for the decisions that led me to her.

    If you had asked me this question before I met Anna, I would have told you I was most proud of myself. I come from a blue-collar background, with little money growing up and not much drive in my early years. I never imagined I’d be where I am today. But I worked hard, paid my way through college, earned two years of full-ride tuition, found a great job with Danaher out of college, landed a role at Amazon, and pushed through a lot of personal challenges along the way.

    There are many struggles I’ve faced—some too personal to get into here—but each of them, no matter how difficult or unfair, helped shape me into a better version of myself. I’m grateful for the journey, the growth, and the future that lies ahead.

    To sum it up, I’ll leave you with a quote that resonates deeply with me:

    "The problems we face today eventually turn into blessings in the rearview mirror."
    Matthew McConaughey

  • I wake up next to my wife, Anna. We joke about who’s going to get the kids up. A game of rock-paper-scissors decides it—of course, I lose. But I’m not mad or ungrateful. Anna’s been working hard as a stay-at-home mom, handling the things that truly matter. I get breakfast going: Star Wars waffles, bacon, and eggs. The kids wake up, we eat together, and talk about the dreams they had the night before.

     

    I head upstairs—Anna’s already showered, dressed, and ready for the day. She takes over with the kids while I shower and get ready for work. Downstairs again, I kiss my wife goodbye and hop in my truck. A podcast plays on the drive in—something that challenges my thinking and pushes me outside my usual patterns.

     

    At the office, I’m not the first one in anymore, but I’m still pretty damn close. I check in with my team, swing by my boss’s office, and connect with the SDMs to ask how their weekends were. When I get to my office, my inbox is manageable—not because there are fewer problems, but because I’ve built a team that’s capable, empowered, and trusted to solve them.

     

    I’m currently working through a complex issue, so I pull the team together to brainstorm solutions. The group includes high performers and some who are still developing, but everyone’s voice matters. I invest in those who are still growing. Every team member has a clear succession path—if they’re willing to put in the work, the opportunity is there.

     

    We operate with transparency. My team knows they can bring anything to me—feedback, complaints, ideas. They don’t need me in the room, but they want me there. Today isn’t a launch day for our new product, but it’s close, and the deployment is on track. SDMs are keeping their teams focused. SDEs know their work matters and that they’re contributing to something bigger. TPMs are running point, turning plans into execution.

     

    Later, I meet with a couple of VPs to talk product strategy—timelines, launch readiness, customer value, and how we’ll operationalize the tech. We align on the vision and execution.

     

    Then I sign off and head home—to my wife and kids, who I love deeply and who, just like at work, have their own challenges and problems that need solving. So I switch gears and dig into the next round of people, process, and tech—this time, at home.