I planned my Japan trip with AI — here is what actually happened
Part of the How to use AI to plan your next trip (and what it gets wrong) guide
A first-person account of using ChatGPT to plan a Japan itinerary: the prompts, what worked, what was confidently wrong, and what had to be fixed.
The trip had been on my mind for two years before I actually booked it. Japan in late March — cherry blossom season, the whole thing — twelve days, travelling with my partner who had never been to Asia before. I'd done a fair amount of travel research in my time but I was curious whether AI could meaningfully accelerate the planning process. So instead of opening seventeen tabs and spending three weeks reading blogs, I decided to do as much of the research as possible through ChatGPT and see what happened.
What happened was instructive. Some of it in the way I expected. Some of it not.
The first prompt, and what came back
I started the way most people start, which is to say I started badly. My first message was: "Plan a 12-day Japan itinerary for late March, cherry blossom season, two people, first time in Japan."
The response was fast, confident, and almost entirely generic. Tokyo for four days, Hakone for two (Mount Fuji views), Kyoto for three, Nara as a day trip, Osaka for two. It listed Senso-ji, the Tsukiji outer market, Arashiyama, Fushimi Inari. It suggested a "ryokan experience" without naming one or explaining what booking a good ryokan actually involves. It mentioned cherry blossom viewing spots in each city.
It was the itinerary a well-read person with no specific knowledge of my preferences would give me. Which is to say it wasn't wrong, exactly, but it wasn't useful in the way I needed it to be. I could have found this on the first page of any Japan travel blog.
The problem wasn't the AI. It was the prompt. I had given it nothing to work with beyond the basics, so it had produced the statistically average answer. This is the most common mistake in AI travel planning — and understanding it changed how I used the tool for everything that followed.
The second attempt: giving it something real to work with
I started over with a longer message. I told it that my partner found crowds overwhelming and needed recovery time after intense days. That I was interested in Japanese design, craft, and food but not nightlife. That we'd both done Southeast Asia and weren't looking for a backpacker experience. That our budget was comfortable but not extravagant — happy to pay for a good ryokan but not interested in luxury hotels just for the category. That we actively wanted to avoid feeling like we were on a tourist conveyor belt.
The response was noticeably different. It cut Hakone and suggested Nikko instead as a day trip from Tokyo — less scenically dramatic but more culturally interesting, it said, given my stated interests. It recommended staying in a smaller neighbourhood ryokan in Kyoto rather than a central hotel, with specific reasoning about what that would mean for the experience. It suggested leaving afternoons unscheduled rather than stacking activities morning to evening. It flagged that Fushimi Inari is genuinely special but only before 7am — at 10am it's a queue.
This version was actually useful. Not perfect, but a real starting point. The difference between the two responses was entirely in the context I had given. I filed this away.
What the AI got genuinely right
Over the next few sessions — I treated it as an ongoing research conversation rather than a one-shot query — there were several places where it was substantially better than the alternative.
Logistics reasoning was the clearest one. I asked it to work out whether our route made sense geographically: Tokyo, Nikko day trip, then bullet train to Kyoto, day trip to Nara, then Osaka for the last two nights before flying home from Kansai International. It immediately pointed out that I had the Osaka leg back-to-front — that the journey from Kyoto to Osaka is about 15 minutes on the shinkansen, which meant I could stay in Kyoto an extra night and check into Osaka on the final morning rather than losing a full day to an unnecessary move. That single suggestion saved a day.
It was also good at surfacing things I hadn't known to ask about. When I mentioned we wanted to use an onsen, it flagged the tattoo policy issue — my partner has a small wrist tattoo — and told me to specifically look for ryokan with private baths (kashikiri buro) rather than communal ones. I hadn't known this was a category. It saved what could have been an awkward situation.
And it was useful for the questions that feel too small to warrant research but add up into a significant part of the trip experience. What IC card should we get and where. Whether to get pocket wifi or a SIM. What the difference is between the various shinkansen service types on the Tokaido line, and which ones the JR Pass doesn't cover. These are questions where the answer is largely stable, the information is dense, and AI synthesises it faster and more clearly than reading four forum threads.
What was confidently wrong
Here is where it gets more interesting.
At one point I asked it to recommend a specific ramen shop in Tokyo — somewhere I could take my partner for a proper bowl on the first evening, not a tourist-facing restaurant, somewhere locals actually go. It gave me the name of a place in Shinjuku with a confident description: small, no English menu, excellent tonkotsu, expect a short queue. It sounded exactly right.
The place had closed eight months earlier. I found this out by searching the name on Google Maps and finding a cluster of reviews from people who had also made the same pilgrimage.
This is the core reliability problem with AI restaurant recommendations and it's worth being direct about: AI training data has a cutoff, restaurant quality changes, and restaurants close. The confidence of the recommendation gives you no signal about its accuracy. For this specific category — specific restaurant names — you need Google Maps, Tabelog, or a human source with recent experience. AI will tell you what kind of restaurant to look for and what neighbourhood to look in. It cannot reliably tell you which door to walk through.
There was also a smaller, more pervasive problem throughout the planning. The AI had no sense of how much a day takes out of you. When I described wanting to visit the bamboo grove at Arashiyama, it suggested combining it in the same morning with the Nishiki Market, Gion at dusk, and dinner in the Higashiyama district. That's a full day, not a half-day add-on. In isolation each suggestion was good; stacked together in one day they would have produced the exact tourist-conveyor-belt feeling I had explicitly said I wanted to avoid.
Every time I pushed back — "that feels like too much for one day" — it adjusted agreeably. But it never volunteered the concern on its own. The AI had no working model of what it actually feels like to walk eight hours through an unfamiliar city while processing constant sensory novelty in a foreign country. That has to come from you.
The thing I had to find out the hard way
I want to be specific about the biggest practical failure in my AI-assisted planning, because it's avoidable if you know to look for it.
At no point during any of my research sessions did the AI flag that Kyoto accommodation in cherry blossom season needs to be booked five to six months in advance. Not as a footnote. Not as a caveat at the end of its accommodation suggestions. It gave me ryokan recommendations with descriptions and rough price brackets, and I assumed — because nothing indicated otherwise — that I had a reasonable window to book.
I started trying to book in late January for a late March trip. Nearly every ryokan I had shortlisted was fully booked. The ones with availability were either out of our budget or in locations we hadn't planned around. We ended up with a good hotel that wasn't what we had wanted for the Kyoto nights, and it cost more than the ryokan would have.
This is exactly the kind of experiential, time-sensitive knowledge that AI doesn't hold reliably — not because the information doesn't exist, but because booking urgency is contextual and the AI has no mechanism for understanding that my planning timeline was already tight relative to my travel dates. A well-travelled friend would have said this in the first five minutes. The AI never said it at all.
The broader case for and against AI in travel planning covers this pattern in more depth — the gap between what AI is genuinely good at and where it needs human experience alongside it. And for anyone starting from scratch, what a proper Japan research process looks like fills in the kind of structural knowledge that makes AI planning sessions more productive.
The honest truth about AI and travel pace
The itinerary that came out of my AI sessions was, after several rounds of refinement, a good skeleton. The city order made sense. The balance between Tokyo and Kyoto was right. The Nikko day trip was a genuine improvement on the Hakone version I'd been initially given. The logistics were solid.
What it couldn't do — what I don't think any general AI tool currently does well — is understand travel pace. How a long travel day followed by a visually intense afternoon at a temple complex followed by an evening walking a new neighbourhood leaves most people genuinely depleted the next morning. How cherry blossom season in Kyoto, for all its beauty, involves crowds that change the character of the city and require more recovery time than a quieter visit. How two people travelling together have different energy levels and thresholds, and an itinerary that works for one might overwhelm the other.
These are things you have to build into your own planning. The AI can't infer them without extensive prompting, and even then it tends to optimise for coverage rather than pacing.
This is why something purpose-built works differently. Budge is essentially a travel researcher you can have a conversation with; I built it because I was tired of piecing a trip together across seventeen browser tabs. It's made for exactly this kind of research rabbit hole: you can ask it follow-up questions and it remembers what you care about across the whole conversation, including the parts about needing slow mornings and avoiding tourist conveyor belts.
The Japan trip was good, in the end. Better than I'd feared after the accommodation near-miss. My partner loved Nikko. We found a ramen shop in Yanaka on the second day — no ChatGPT involved, just walking past the right door at lunchtime — that became the meal we talked about most on the flight home.
AI got us a workable framework. The rest, as always, was being there.
Plan your own trip with AI
Budge turns a conversation into a full travel plan — flights, hotels, budget, and everything in between.
Start planning for free →