Show HN: An MCP server that gives LLMs temporal awareness and time calculation

github.com

84 points by lumbroso a day ago

This is an open‑source Model Context Protocol (MCP) server that gives any LLM a sense of the passage of time.

Most MCP demos wire LLMs to external data stores. That’s useful, but MCP is also a chance to give models perception — extra senses beyond the prompt text.

Six functions (`current_datetime`, `time_difference`, `timestamp_context`, etc.) give Claude/GPT real temporal awareness: It can spot pauses, reason about rhythms, and even label a chat’s “three‑act structure”. Runs locally in <60 s (Python) or via a hosted demo.
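
For a rough idea of what's under the hood, here's a minimal sketch using the official MCP Python SDK's FastMCP helper (the tool bodies below are simplified stand-ins, not the repo's exact code):

    # Minimal sketch of a time-aware MCP server (simplified, not the repo's actual implementation).
    from datetime import datetime

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("passage-of-time")

    @mcp.tool()
    def current_datetime() -> str:
        """Return the current local date and time as an ISO 8601 string."""
        return datetime.now().astimezone().isoformat(timespec="seconds")

    @mcp.tool()
    def time_difference(timestamp1: str, timestamp2: str) -> str:
        """Return the gap between two ISO timestamps in human wording."""
        t1, t2 = datetime.fromisoformat(timestamp1), datetime.fromisoformat(timestamp2)
        minutes = abs((t2 - t1).total_seconds()) / 60
        return f"about {minutes:.0f} minutes" if minutes < 90 else f"about {minutes / 60:.1f} hours"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; point Claude Desktop (or any MCP client) at this script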

If time works, what else could we surface?

- Location / movement (GPS, speed, “I’m on a train”)
- Weather (rainy evening vs clear morning)
- Device state (battery low, poor bandwidth)
- Ambient modality (user is dictating on mobile vs typing at desk)
- Calendar context (meeting starts in 5 min)
- Biometric cues (heart‑rate spikes while coding)

Curious what other signals people think would unlock better collaboration.

Full back story: https://medium.com/@jeremie.lumbroso/teaching-ai-the-signifi...

Happy to discuss MCP patterns, tool discovery, or future “senses”. Feedback and PRs welcome!

saberience a day ago

This title really doesn't fit what the submission actually does.

The submitter made a basic MCP function that returns the current time, so... Claude knows the current time. There is nothing about sundials and Claude didn't somehow build a calendar in any shape or form.

I thought this was something original or otherwise novel but it's not... it's not complex code or even moderately challenging code, nor is it novel, nor did it result in anything surprising... it's just a clickbaity title.

  • lumbroso a day ago

    Fair point on the metaphor—let me be concrete.

    What’s new here isn’t just exposing `current_datetime()`. The server also gives the model tools to reason about time:

      (1) time_difference(t1, t2)  – exact gaps with human wording  
    
      (2) timestamp_context(t)      – “weekend evening”, “workday morning”  
    
      (3) time_since(t)             – “2 h ago, earlier today”  
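
    As a rough illustration of (2), a small heuristic is all a timestamp_context-style tool needs (a sketch of the idea, not the repo's exact wording):

        from datetime import datetime

        def timestamp_context(timestamp: str) -> str:
            """Label an ISO timestamp in everyday terms, e.g. 'weekend evening'."""
            t = datetime.fromisoformat(timestamp)
            day_kind = "weekend" if t.weekday() >= 5 else "workday"
            if 5 <= t.hour < 12:
                part = "morning"
            elif 12 <= t.hour < 18:
                part = "afternoon"
            else:
                part = "evening"
            return f"{day_kind} {part}"

        print(timestamp_context("2024-03-16T20:30:00"))  # a Saturday night -> "weekend evening"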
    
    I also ask Claude to request the time at every turn, which creates a time series parallel to our interactions. When Claude calls these tools every turn, it starts noticing patterns (it independently labelled our chat as a three-act structure). That was the surprise that prompted the title.

    Ask Claude “what patterns do you see so far?” after a few exchanges.

    If you still find it trivial after trying, happy to hear why—genuinely looking for ways to push this further. Thanks for the candid feedback.

    Finding a good title is really hard; I'd appreciate any advice on that. You'll notice I wrote the article several weeks ago, and that's how long it took me to figure out how to pitch it on HN. Thanks!

  • dang a day ago

    Clearly an honest mistake but yeah a metaphor probably shouldn't be used in a title like this, since many readers will take it literally. I've changed the title now to language from the article.

    (Submitted title was "Show HN: I gave Claude a sundial and it built a calendar")

    • lumbroso 7 hours ago

      Thanks so much for the title change! I completely understand.

      I apologize to the community for the mistake. I appreciate this feature of this community's discourse. I'll remember to use literal, precise language in the future.

      Your reworded title fits perfectly — thank you!

  • fennecbutt a day ago

    That's MCP/AI libraries for ya.

  • whartung a day ago

    Give it a picture of the Sun at the same time every day, and let's see if it comes up with a calendar from that.

  • deadbabe a day ago

    Agreed. I’m tired of these ridiculous claims by people just trying to hype up LLMs. Flagging this article.

    • lumbroso 7 hours ago

      I'm sorry for choosing an inappropriate title — that was my bad, and fortunately @dang helped correct this mistake.

      Aside from the title, what claims do I make that you find ridiculous?

rlupi a day ago

Physical/mental health and personal journaling?

I just finished some changes to my own little project that provides MCP access to my journal stored in Obsidian, plus a few CLI tools for time tracking, and today I added recursive yearly/monthly/weekly/daily automatic retrospectives. It can be adapted for other purposes (e.g. project tracking) by tweaking the templates.

https://github.com/robertolupi/augmented-awareness

  • lumbroso 6 hours ago

    Hey, thanks so much for sharing, your repo is really cool, including the GEMINI.md context engineering file!

    I am curious: you say "offline-first or local-first, quantified self projects". What models do you use with your projects?

    I find the LLMs like the Claude and GPT families to be incredibly impressive for integration and metacognition — however, I am not sure yet what LMs are best for that purpose, if there are any.

    Your "Augmented Awareness" framework seems to be metacognition-on-demand. In practice, how has it helped you recently? Is it mostly automated, or does it require a lot of manual data transfers?

    I am assuming that the MCP server is plugged into a model, and that in the model you run prompts to run retrospectives.

    Have you written about this?

jayd16 a day ago

I was looking for the calendar app that was built but I guess it's metaphorical.

"We made an API for time so now the AI has the current time in it's context" is the bulk of it, yes?

  • lumbroso 5 hours ago

    One‑shot timestamps (the kind hard‑coded into Claude’s system prompt or passed once at chat‑start) go stale fast. In a project I did with GPT‑4 and Claude during a two‑week programming contest, our chat gaps ranged from 10 seconds to 3 days. As the deadline loomed I needed the model to shift from “perfect” suggestions to “good‑enough, ship it” advice, but it had no idea how much real time had passed.

    With an MCP server the model can call now(), diff it against earlier turns, and notice: "you were away 3 h, shall I recap?" or "deadline is 18 h out, let’s prioritise". That continuous sense of elapsed time simply isn’t possible with a static timestamp stuffed into the initial prompt; you'd have to create a new chat to update the time, and every fresh query would require re‑injecting the entire conversation history. MCP gives the model a live clock instead of a snapshot.
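
    As an illustrative sketch (names made up, not the repo's code), the "you were away 3 h, shall I recap?" behaviour is just a diff against the previous turn's timestamp:

        from datetime import datetime, timedelta

        RECAP_THRESHOLD = timedelta(hours=2)  # arbitrary cutoff for this sketch

        def absence_note(last_turn_iso: str, now_iso: str) -> str | None:
            """Suggest a recap when the gap since the previous turn is large."""
            gap = datetime.fromisoformat(now_iso) - datetime.fromisoformat(last_turn_iso)
            if gap >= RECAP_THRESHOLD:
                return f"You were away about {gap.total_seconds() / 3600:.0f} h. Want a recap before we continue?"
            return None

        print(absence_note("2024-03-16T09:00:00", "2024-03-16T12:05:00"))
        # -> You were away about 3 h. Want a recap before we continue?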

  • morkalork a day ago

    The current time (and location of the user, looking at you Google Gemini) is injected in most LLM chats now, isn't it?

    • kridsdale1 a day ago

      At the start. But the human may perceive the rest of the conversation as taking many minutes or hours, while the LLM never gets any signal that later text is chronologically separated from earlier text. It needs a polling API like this.

Disposal8433 a day ago

Again and again, your code lacks the basics of engineering. Where are your package manager and requirements? Your code would never pass any test in a professional context. It's like you haven't gone past a Python tutorial and feel the AI output is acceptable.

The docs are pictures, and what is a Pipfile in any context? It looks like a requirements file, but you never bothered to follow the news about pip or uv.

Every AI project is like that and I'm really scared for the future of programming.

  • dewey a day ago

    You can program just for fun, without having to make it a professional project. Just like you can do some woodworking without having the goal of becoming a professional carpenter.

    • Disposal8433 a day ago

      Yes you can. Until managers and CEOs demand that you use those tools or you're fired. Whenever I see such a bad project, I think of what may happen in the next 5 years, and it's dreadful. We're professionals after all.

      And BTW it's already happening, it's not a fantasy.

      • dewey a day ago

        You can both write hacky projects in your free time and write good, well-tested code in your professional life. It’s not that deep.

        • qingcharles a day ago

          This is how I've always coded. My own projects are like freeform doodles on scrap paper. My professional work is completed, polished commissions.

      • barbazoo a day ago

        What someone builds privately using AI has nothing to do with what expectations organizations decide to put on their employees. This isn't something that will make it into a professional context so who cares if it is in fact shit?!

        Imagine a woodworking forum where someone showing off their little 6-piece toolbox gets called out because it doesn't adhere to residential building code, and for what this does to the profession of woodworkers...

      • lumbroso a day ago

        Disposal8433, I am not unsympathetic to your point, but I think that bad managers and CEOs are bad managers and CEOs.

        For instance at Boeing, the fault for the software problems lies entirely with the managers: they made the decision to subcontract software engineering to a third party to cut costs, but they also didn't provide the contractor with enough context and support to do a good job. It's not subcontracting that was bad — subcontracting can be the right solution in some circumstances, with proper scoping and oversight — it was the management.

        The MCP protocol is changing every few weeks; it doesn't make sense (to me at least) to professionalize a technical demo, and I appreciate that LLMs allow for faster iteration and exploration.

  • orsorna a day ago

    This really isn't dissimilar to any work I've seen in a professional setting, minus the screenshot docs. I agree those are bad. Everything useful is in the README.

    `uv` is great but `pipenv` is a perfectly well-tested Python dependency manager (albeit slow). Down in the instructions it explicitly asks you to use `pipenv` to manage dependencies. I also don't think your "what is a Pipfile in any context" complaint is fair; I don't think I've ever seen a project list a dependency manager and then explicitly call out the artifacts that dependency manager needs to function.

riedel a day ago

I am giving a lecture on context-sensitive systems. One place where all this context awareness failed was getting it into higher-level reasoning and adapting program logic (think, for example, of the Android activity API). I was just telling the students that with MCPs as the interface to all the context sources (like sensor-based activity classifiers, but definitely also time) we might overcome that challenge soon. Cool to see people starting to implement that kind of stuff...

  • lumbroso a day ago

    That's exactly what I've been thinking too!

    MCP + LLMs = our solution to data integration problems, which include context awareness limitations.

    It's an exciting development and I am glad you see it too!

gmiller123456 a day ago

Not really anything in there regarding the sundial. I'm guessing that was put in there metaphorically for clickbait reasons.

Knowing quite a bit about sundials, I was genuinely curious about how that would work, as a typical (horizontal) sundial doesn't have enough information to make a calendar. It's a time-of-day device rather than a time-of-year device. You could teach the model about the Equation of Time or the Sun's declination, but it wouldn't need the sundial at that point. There are sundials, like a spider sundial or a nodus sundial, that encode date information too. But there's overlap/ambiguity between the two solstices as the sun goes from its highest declination to its lowest and back again. Leap years add some challenges too. There are various ways to deal with those, but I think you can see why I was curious how producing a calendar from a sundial would work (without giving it some other information that makes the sundial unnecessary).

  • lumbroso a day ago

    I'm sorry for the misleading title about a sundial, it was a metaphor, and based on the feedback here, if I had to do it again I would pick a different one. :-)

    My only worry with these MCP "sensors" is that they add to the token cost — and more importantly to the context-window cost. It would be great to have the models regularly poll for new data and factor it into their inferences. But I think the models (at least with current attention) will always face a trade-off between how much they are provided and what they can focus on. I am afraid that if I give Claude numerous senses, it will lower its attention to our conversation.

    But your exciting comment (and again I apologize for disappointing you!) makes me think about creating an MCP server that provides, say, the position of the sun in the sky for the current location, or maybe some vectorized representation of a specific sundial.

    I think the digitized information that we experience is more native to models (i.e., it requires fewer processing steps to extract insights from), but it's possible that providing them this kind of input would result in unexpected insights. They may notice patterns, e.g., more grumpy when the sun is in this phase, etc.

    Thanks for your thoughtfulness!

MarkLowenstein a day ago

I love the basic point. Timing-based association is fundamental to thinking, across species. How does the bunny know that you're stalking it? Because your eyes move when it moves. I had no idea that LLMs missed all this. Plus the political reference is priceless.

  • lumbroso 5 hours ago

    Glad the little political wink landed with at least one reader!

    You’re right: Stripping away all ambient context is both a bug and a feature. It lets us rebuild “senses” one at a time—clean interfaces instead of the tangled wiring in our own heads.

    Pauses are the first step, but I’m eager to experiment with other low‑bandwidth signals:

    • where the user is (desk vs. train)
    • weather/mood cues (“rainy Sunday coding”)
    • typing vs. speech (and maybe sentiment from voice)
    • upcoming calendar deadlines

    If you could give an LLM just one extra sense, what would you pick—and why?

cchance 20 hours ago

Why a tool though? Why not just append these details onto the context, literally just a "current epoch" timestamp between updates?

  • lumbroso 5 hours ago

    Great question! Injecting a raw epoch each turn can work for tiny chats, but a tool call solves three practical problems:

    1. *Hands‑free integration*: ChatGPT, Claude, etc. don’t let you auto‑append text, so you have to manually do it. Here, a server call happens behind the scenes—no copy‑paste or browser hacks.

    2. *Math & reliability*: LLMs are notoriously unreliable at arithmetic without external tools. The server not only returns now() but also time_difference(), time_since(), etc., so the model gets ready-made numbers instead of trying to subtract 1710692400 - 1710688800 itself (worked example below).

    3. *Extensibility*: Time is just one "sense." The same MCP pattern can stream location, weather, typing‑vs‑dictation mode, even heart‑rate. Each stays a compact function call instead of raw blobs stuffed into the prompt.

    So the tool isn’t about fancy code—it’s about giving the model a live, scalable, low‑friction sensor instead of a manual sticky note.
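
    To make the arithmetic point concrete: the two epochs above are exactly one hour apart, and the tool's job is to hand that back in words so the model never does the subtraction itself (a throwaway sketch, not the server's code):

        from datetime import datetime, timezone

        t1 = datetime.fromtimestamp(1710688800, tz=timezone.utc)
        t2 = datetime.fromtimestamp(1710692400, tz=timezone.utc)
        delta = t2 - t1
        print(delta)                                                  # 1:00:00
        print(f"about {delta.total_seconds() / 3600:.0f} hour ago")   # the kind of string a time_since-style tool returns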

erispoe a day ago

Claude can run code. Add to your customs instructions to check the time regularly and you're done. Why do you need an MCP?

  • lumbroso 4 hours ago

    It's a good idea. I didn't think of it because this project came out of a "let's try to write a remote MCP server now that the standard has stabilized" exercise.

    But there are some issues:

    1. Cheaper + deterministic: Having the model generate and run code each turn is much more costly, both in tokens and in context window, than making a tool call. (Generating the code takes many more tokens than the call itself.) And the generated code can vary from query to query, with issues like timezone handling.

    2. Portability: Not all LLM or LM environments have access to a code interpreter; a tool call is a much lower resource requirement.

    3. Extensibility: This approach is extensible: it lets us expand the toolkit with additional cognitive scaffolds that contextualize, for the model, how we experience time. (This is a fancy way of saying: the code only gives the timestamp, but building an MCP server allows us to contextualize this information — "this is when I sleep, this is when I eat or commute", etc. See the sketch after this list.)

    4. Security: Ops teams are happier approving a read-only REST call than arbitrary code execution.
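
    To give a flavour of what I mean by a cognitive scaffold, here is a hypothetical daily_rhythm tool (the name and the schedule are invented for illustration) that maps a timestamp onto the user's routine:

        from datetime import datetime

        # Purely illustrative schedule; a real server would read this from user config.
        ROUTINE = [
            (0, 7, "sleeping"),
            (7, 9, "commuting"),
            (9, 18, "working"),
            (18, 20, "eating dinner"),
            (20, 24, "winding down"),
        ]

        def daily_rhythm(timestamp: str) -> str:
            """Map an ISO timestamp onto a label from the user's typical routine."""
            hour = datetime.fromisoformat(timestamp).hour
            for start, end, label in ROUTINE:
                if start <= hour < end:
                    return f"usually {label} at this hour"
            return "unknown"

        print(daily_rhythm("2024-03-18T08:15:00"))  # -> "usually commuting at this hour"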

  • lumbroso 4 hours ago

    One last thing I will say: the MCP specification is unclear about how much of the server's initial "instructions" (effectively the server's README.md for the model) actually gets discovered by the model. In the "passage-of-time" MCP server, the instructions describe each available tool and require the model to poll the time at each message.

    In practice, this hasn't really worked. I've had to add a custom instruction to "call current_datetime" at each message to get Claude to do it consistently over time.

    Still, it is meaningful that I ask the model to make a single quick query rather than generate code.

  • daveguy a day ago

    Because LLMs are notoriously unreliable especially over long context. Telling it something like "check the time before every turn" is going to fail after enough interactions. MCP call is more reliable for programmatic and specific queries, like retrieving the time.

cwmoore a day ago

I would argue that "that gives any LLM a sense of the passage of time" is but a suspension of disbelief and metaphorical hope.

For those looking for "a calendar", here is one[0] I made from a stylized orrery. No AI. Should be printable to US Letter paper. Enjoy.

EDIT: former title asserted that the LLM built a calendar

[0] https://ouruboroi.com/calendar/2026-01-01

  • lumbroso 4 hours ago

    Without engaging in the whole "anthropomorphizing" debate in this post, I'll say I reject the framing, for many reasons I'd be happy to discuss.

    At the same time I understand what you mean, and I agree that no, this does not give any LLM a sense of anything in the way we conceive it. But it provides them context we take for granted, in service of further customizing their outputs.

    Your "calendar" is nice, thanks for sharing. :)

cjlm a day ago

The sycophancy from Claude is incredibly jarring. I agree with Ethan Mollick that this could turn out to have a more disastrous impact than AI hallucination.

https://www.linkedin.com/posts/emollick_i-am-starting-to-thi...

  • unshavedyak a day ago

    It's even a blocker for some design patterns. Ie it's difficult to discuss options and choose the best one when the AI agrees with you no matter what. If you ask "But what about X" it is more likely to reverse course and agree with your new position entirely.

    It's really frustrating. I've come to loathe the agreeable tone because every time i see it i remember the times where i've hit this pain point in design.

    • ghc a day ago

      I absolutely hate this too. And the only way around it is to manipulate it into cheerfully pointing out all the problems with something in a similarly sycophantic way.

    • danielbln a day ago

      I've found that three words help: "critical hat on". Then you get the real talk.

      • jayd16 a day ago

        What a ridiculous world we live in.

  • shagie a day ago

    In my ChatGPT customization prompt I have:

        Not chatty.  Unbiased.  Avoid use of emoji.  Rather than "Let me know if..." style continuations, list a set of prompts to explore further topics.  Do not start out with short sentences or smalltalk that does not meaningfully advance the response.
    
    I want an intelligent agent (or one that pretends to be) that answers the question rather than something that I chat with.

    As an aside, I like the further prompt exploration approach.

    An example of this from the other day - https://chatgpt.com/share/68767972-91a8-8011-b4b3-72d6545cc5... and https://chatgpt.com/share/6877cbe9-907c-8011-91c2-baa7d06ab4...

    One part of this in comparison with the linked in post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double checked - I like that it links its sources now).

    However, that's a me thing - something that I do (or avoid doing) with how I interact with an LLM. As noted with the stories of people following the advice of an LLM, it isn't something that is universal.

    • lumbroso 4 hours ago

      Thank you so much for sharing your customizations and conversations, it is really fascinating and generous!

      In both of your conversations, there is only one depth of interaction. Is that typical for your conversations? Do you have examples where you iterate?

      I think your meta-cognitive take on the model is excellent:

      "One part of this in comparison with the linked in post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double checked - I like that it links its sources now)."

      The only thing I would add is that, as a reference librarian, it can surface template decision-making patterns.

      But I think it's more like that cognitive trick where you assign outcomes to the sides of a coin, flip it, and see how your brain reacts — you're not using the coin to make the decision, you're using it to surface information from your brain's System 1.

      • shagie 2 hours ago

        I do have some that I iterate on a few times, though their contents aren't ones that I'd be as comfortable making public.

        In general, however, I'm looking for the sources and other things to remember the "oh yea, it was HGS-1" that I can then go back and research outside of ChatGPT.

        Flipping a coin and then considering how one feels about the outcome and using that to guide the decision is useful. Asking ChatGPT and then accepting its suggestion is problematic.

        I believe that there's real danger in ascribing prophecy, decision making, or omniscience to an LLM. (Aside: here's an iterative chat you can see, which helped me pick the right wording for this bit - https://chatgpt.com/share/68794d75-0dd0-8011-9556-9c09acd34b... (first version missed the link))

        I can see how easy it is to do. It goes all the way back to Eliza and people chatting with that, and I see people trusting the advice as a way of offloading some of their own decision-making agency to another thing. ChatGPT as a therapist is something I'd be wary of; not that it can't make those decisions, but rather that it can't push the responsibility of making those decisions back to the person asking the question.

        To an extent, being familiar with the technology and struggling, as a programmer, with decision fatigue ( https://en.wikipedia.org/wiki/Decision_fatigue ) in the evening (not wanting to think anymore since I'm all thought out from the day)... it would be so easy to let ChatGPT do its thing and make the decisions for me. "What should I have for dinner?" (Aside: this is why I've got a meal delivery subscription, so that I don't have to think about that; otherwise I snack on unhealthy food or skip dinner.)

        ---

        One of the things that disappointed me with the Love, Death & Robots adaptation of Zima Blue ( https://youtu.be/0PiT65hmwdQ ) was that it focused on Zima and art and completely dropped the question of memory and its relation to art and humanity (and Carrie). The adaptation follows Zima's story arc without going into Carrie's.

        For me, the most important part of the story that wasn't in the adaptation follows from the question "Red or white, Carrie?" (It goes on for several pages in a socratic dialog style that would be way too much to copy here - I strongly recommend the story).

  • lumbroso 3 hours ago

    First, I think various models have various degrees of sycophancy, and there are a lot of stereotypes out there. Often the sycophancy is a "shit sandwich" — in my experience, the models I interact with do push back, even when polite.

    But for the broader question: I see sycophancy as a double‑edged sword.

    • On one side, the Dunning–Kruger effect shows that unwarranted praise can reinforce over‑confidence and bad decisions.

    • On the other, chronic imposter syndrome is real—many people underrate their own work and stall out. A bit of positive affect from an LLM can nudge them past that block.

    So the issue isn't "praise = bad" but dose and context.

    Ideally the model would:

    1. mirror the user's confidence level (low → encourage, high → challenge), and

    2. surface arguments for and against rather than blanket approval.

    That's why I prefer treating politeness/enthusiasm as a tunable parameter—just like temperature or verbosity—rather than something to abolish.

    In general, these all-or-nothing, catastrophizing narratives in AI (like in most places) often hide very interesting questions.

  • organsnyder a day ago

    I'm struck by how often Claude responds with "You're right! Now let me look at the file..." when it can't know whether I'm right until after it looks at the file in question.

  • itemize123 17 hours ago

    What am I missing? I am not seeing this particular example as sycophantic. Claude is saying something like: the user's assertion is improbable, but if it were the case, the user would need to show/prove some of the things in this table.

  • CamperBob2 a day ago

    They have introduced a beta 'Preferences' feature recently under Custom Instructions. I've had good results from this preference setting in GPT:

        Answer concisely when appropriate, more 
        extensively when necessary.  Avoid rhetorical 
        flourishes, bonhomie, and (above all) cliches.  
        Take a forward-thinking view. OK to be mildly 
        positive and encouraging but NEVER sycophantic 
        or cloying.  Above all, NEVER use the phrase 
        "You're absolutely right."
    
    I just copied it into Claude's preferences field, we'll see if it helps.