I didn’t see this coming and I think it’s funny, so I decided to post it here.
gotta keep Wirth's law going strong
Announcing FemtoServices™ - One Packet at a Time!
In an era of bloated bandwidth and endless data streams, today we proudly unveil a groundbreaking approach to networking: FemtoServices™ – Connectivity, one Ethernet packet at a time!
(Not to be confused with our premium product, ParticleServices, which just shoots neutrinos around one by one.)
I was going to write that every function should be a service as sarcasm, then I realized that’s exactly what this article is proposing. Now I’m not even sure how to make a more ridiculous proposal than this.
Why would your whole function be 1 service? That is bad for scalability! Your code is bad and the function will fail 50% of the time halfway through anyway. By splitting your function up into different services, you can scale the first half without having to scale the second half.
It’s probably AI-supported slop.
Yeah, I had been willing to give the author the benefit of the doubt that this was all part of a big joke, until I saw that the rest of their blog postings are also just like this one.
Ah, you’re right
Can't wait to set up a Docker container for a service which takes a string input and transforms it into a number as the output. Full logging, its own certificate for encryption of course, five pages of config options, and of course documentation. Now, you want to add two numbers together? You got the addition service set up, right?
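For illustration, the entire business logic of such a service might be a Ruby sketch like this (the endpoint, port, and webrick dependency are all invented; the Docker container, TLS certificate, logging, and the five pages of config are left as an exercise for the reader):

```ruby
# Purely illustrative: the string-to-number "service" in its entirety.
# Assumes the webrick gem is installed (no longer bundled with Ruby 3+).
require 'webrick'
require 'json'

server = WEBrick::HTTPServer.new(Port: 8080)

# POST {"value": "42"}  ->  {"number": 42}
server.mount_proc '/to-number' do |req, res|
  res.content_type = 'application/json'
  begin
    payload = JSON.parse(req.body)
    res.body = { number: Integer(payload.fetch('value')) }.to_json
  rescue JSON::ParserError, KeyError, ArgumentError, TypeError
    res.status = 400
    res.body = { error: 'that is not a number' }.to_json
  end
end

trap('INT') { server.shutdown }
server.start
```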
left-pad as a service.
It’s a modern day enterprise fizzbuzz: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
Nano services are microservices after your company realizes monoliths are much easier to maintain and relabels their monoliths as microservices.
Unironically. I’d put a significant wager down on that being the source of this term.
That’s exactly what happens at my job.
quantum services
take your source code and put each character in its own docker container
this gives you the absolute peak of scalability and agility as every quantum of your application is decoupled from the others and can be deployed or scaled independently
implementing, operating and debugging this architecture is left as an exercise for the reader
that will be $250,000 kthx
implementing, operating and debugging this architecture is left as an exercise for the reader
Challenge accepted by a reader using AI, what could go wrong? xD
Neovimservices ftw
This “article” was written by AI, wasn’t it? This is just throwing vague buzzwords around
I dunno, people were doing that long before AI
That intro and general structure (AI loves bulleted lists but then again so do I) sure sound like a lot of the responses I’ve gotten. As always, it’s hard to say for sure.
I’m trying to understand how this is different than a concept I learned in computer science in the late 80s/early 90s called RPCs (remote procedure calls). My senior project in college used these. Yes I’m old and this was 35 years ago.
Microservice architectures are ad hoc, informally-specified, bug-ridden, slow implementations of Erlang, implemented by people who think that “actor model” has something to do with Hollywood.
Ok this made me chuckle out loud.
Planck services
My services are so small that it is impossible to know just how fast they are running!
I am now offering Planck services for sale, at US$0.0001 per bit.
For an extra fee, you can even choose the value of the bit.
This is just distributed functions, right? This has been a thing for years. AWS Lambda, Azure Functions, GCP Cloud Functions, and so on. Not everything that uses these is built on a distributed functions model but a fuck ton of enterprises have been doing this for years.
We already have nanoservices, they’re called functions. If you want a function run on another box, that’s called RPC.
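To put that concretely, here's a rough Ruby sketch (the internal hostname, port, and endpoint are made up): the "nanoservice" version of a function is the same function with serialization and a network hop bolted on, which is all RPC ever was.

```ruby
require 'net/http'
require 'json'
require 'uri'

# the function, called in-process
def left_pad(str, len, char = ' ')
  str.rjust(len, char)
end

# the same function on another box: serialize the arguments, ship them over
# the wire, deserialize the result -- a remote procedure call by any name
def left_pad_rpc(str, len, char = ' ')
  uri = URI('http://leftpad.internal:8080/left_pad')  # hypothetical endpoint
  res = Net::HTTP.post(uri, { str: str, len: len, char: char }.to_json,
                       'Content-Type' => 'application/json')
  JSON.parse(res.body).fetch('result')
end

left_pad('42', 5, '0')       # => "00042"
# left_pad_rpc('42', 5, '0') # same result, plus latency, retries, and a pager
```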
You know what they say: micro services, macro outages.
Tech moves in cycles. We come back to the same half-baked ideas every so often, pretend we just discovered them, and then build more and more technologies on top to try to fix the foundational problems with the concept, until something else shiny comes along. A lot of tech work is “there was an old lady who swallowed a fly”.
I always keep saying "You cannot plan your way out of a system built on broken fundamentals." Microservices have their use case, but not every web app needs to be one. Too many buzzwords floating around in tech that promise things that cannot be delivered.
We’ve been using nano-services for the past 6 months or so, for two different reasons. A codebase we absorbed when a different team was dissolved had a bunch of them, all part of AWS AppSync functions. I hate it. It’s incredibly hard to parse and understand what is going on because every single thing is a single function and they all call each other in different ways. Very confusing.
But the second way we implemented ourselves, and it’s going very well. We started using AWS Step Functions, and it lets you build very decoupled systems by piecing together much larger pieces. It’s honestly a joy to use and incredibly easy to debug. The hardest part is testing, but once it’s working it seems very stable. But sometimes you need to transform data to piece together these larger systems. That’s where ‘nano-services’ come in. Essentially they’re just small Ruby, Python, or JS Lambdas stuck into the middle of a Step Functions flow to do more complex data transformation and pass it to the next node in the flow. When I say small, I mean one of the functions we have is just this:
```ruby
def handler(event:, context:)
  if event['errorType']
    clazz = Object.const_set event['errorType'], Class.new(StandardError)
    raise clazz.new.exception, event['errorMessage']
  end
  event
end
```
to map a service that doesn’t fail with a 4xx HTTP code to one that does.
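For example, with a made-up payload from the upstream service:

```ruby
# Hypothetical payload the upstream node returns instead of actually failing:
event = { 'errorType' => 'NotFound', 'errorMessage' => 'no such record' }

handler(event: event, context: nil)
# => raises NotFound ("no such record"), so the step genuinely fails and the
#    error can be caught and mapped downstream instead of being passed along
#    as ordinary data
```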
You could argue this is a complete waste of resources, but it allows us to keep using that other service without any modifications. All the other services that depend on that service that maps its own error types can keep working the way they want. And if we ever do update that service and all its dependencies, now ‘fixing’ the workflow is literally as simple as just deleting the node and the ‘nano-service’ to go along with it.
I should note that the article is about the first thing I discussed, the terrible codebase. Please don’t use nano-services like that; it’s literally one of the worst codebases I’ve ever touched, and no joke, it’s less than 2 years old.
This looks like hell.
I’m a C/C++ developer though.
You can write your glue nano-service in C/C++ if you want; it’s just that: glue. It doesn’t matter, as long as you don’t need to change the original services, which can also be written in whatever you want. Ruby, Python, and JS just work out of the box with AWS Lambda, and you don’t really have to maintain them or any sort of build infra, so the maintenance and upkeep cost is very low. You don’t really test these glue lambdas either.
Things won’t be simpler just because you cut everything up into tiny, tiny pieces (I mean it will be easier because it solves some surface-level problem right now, while pushing the real problem down the road); it creates a complexity of its own.
You didn’t read what I wrote at all.
It’s easy to say I didn’t read your message, which I obviously did (why write lies like that?), just because you don’t understand my point.