OpenTelemetry founder tools up for project graduation party



We gotta get boring to get graduated
Grafanacon The founder of the OpenTelemetry project says its maintainers may need to turn to AI tools to get some elements robust enough for the project as a whole to graduate.

Ted Young was speaking at Grafanacon in Barcelona, where he told the audience the aim for the next year was to make OpenTelemetry as "boring" as possible so that it can finally become a fully graduated CNCF project.

"That might sound crazy, but it is crazy," he said.

"It's important to remember, though, with telemetry, the opposite of boring isn't interesting, it's frustrating. So when you think about it, boring is amazing, and it's harder to be boring than one might imagine."
More practically, he said, "Our top priority is to stabilize all the things." By which he meant all the elements that make up the OpenTelemetry ecosystem.

"What could be more boring than this, not changing things?" he continued.

"But it's actually amazing, because it means that we've reached the end of the road for the original goals of the project: tracing, metrics, and logs, stable, unified everywhere.

These are the final pieces that OpenTelemetry needs to finally graduate from the CNCF."
He pointed out that OpenTelemetry had been in production for years, "and we've made quite a bit of de facto stable software along the way."
But some organizations had security rules that "ban the installation of software marked beta."
That means everything important in OpenTelemetry needs to become 1.0, he said.

"So, the final boss of the stabilization effort actually isn't the collector or the SDKs or any other core components. It's the instrumentation."
But Young explained this is where the surface area gets enormous.

"The real heavy lifting comes in rolling out all of those stable semantic conventions to all the actual instrumentation packages in every single language."
This is achieved in a two-stage rollout, he said.

Most instrumentation packages are "de facto stable" and can safely be marked 1.0, he said.

"But the data could be better."
"Stage Two will be lifting that data up everywhere to the latest version of the semantic conventions [for defining data] once they become available via another major version bump."
This is harder than it sounds, Young added.

"We're going to need to invent new tools and potentially apply new coding techniques in order to handle the scale of instrumenting all the software on the planet."
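To make the "stage two" data migration concrete, here is an illustrative sketch of the kind of rename it involves. The mapping entries mirror real renames from the stable HTTP semantic conventions (for example, `http.method` became `http.request.method`), but the helper function itself is hypothetical, not OpenTelemetry API:

```python
# Hypothetical helper illustrating a semantic-convention "stage two" upgrade:
# span attributes recorded under older convention names are renamed to their
# newer stable equivalents. The rename pairs below reflect actual HTTP
# semantic-convention changes; the helper is invented for illustration.

# Legacy attribute key -> stable attribute key.
SEMCONV_RENAMES = {
    "http.method": "http.request.method",
    "http.status_code": "http.response.status_code",
    "http.url": "url.full",
}

def upgrade_attributes(attributes: dict) -> dict:
    """Return a copy of span attributes with legacy keys renamed."""
    return {SEMCONV_RENAMES.get(key, key): value for key, value in attributes.items()}

legacy = {"http.method": "GET", "http.status_code": 200, "custom.key": "kept"}
print(upgrade_attributes(legacy))
# {'http.request.method': 'GET', 'http.response.status_code': 200, 'custom.key': 'kept'}
```

The hard part at project scale is not the rename itself but doing it consistently across every instrumentation package in every language, which is why it is gated behind a major version bump.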
Speaking to The Register, Young said he would like to get this tackled this year.

"Feedback we've gotten from the community as part of doing research for the graduation was that we were basically being overcautious by including data stability as part of our version numbers.

People want to just know, is this code safe to run in production?"
This means dealing with the long tail of instrumentation packages for each language the project supports.

Many of these were contributed by the community, and maintaining them all could demand a massive expansion in the number of maintainers.

"So, we're also looking at, is there tooling we can write using Weaver and the stuff that's coming out of the semantic conventions tooling group to help make it easier to maintain all this software?" And, "Are there ways we can use AI coding techniques and things of that nature to lower the load?"
The aim is to increase automation, so when the semantic conventions are updated, libraries can be automatically updated.

"So it becomes less about having to write the code and more about just needing to be able to review these things."
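A minimal sketch of that automation idea: regenerate a constants module from a machine-readable attribute registry whenever the conventions change, leaving humans to review the diff. The JSON registry format here is invented for illustration; OpenTelemetry's real Weaver tool works from the semconv registry and its own templates:

```python
# Hypothetical codegen sketch: turn an attribute registry into a Python
# constants module. The registry schema below is invented; it stands in for
# the machine-readable semantic-conventions registry that tools like Weaver
# actually consume.
import json

registry = json.loads("""
[
  {"id": "http.request.method", "brief": "HTTP request method."},
  {"id": "url.full", "brief": "Absolute URL of the request."}
]
""")

def generate_constants(attributes: list) -> str:
    """Emit a Python module defining one constant per registry attribute."""
    lines = ["# Auto-generated from the attribute registry. Do not edit by hand."]
    for attr in attributes:
        name = attr["id"].upper().replace(".", "_")
        lines.append(f'{name} = "{attr["id"]}"  # {attr["brief"]}')
    return "\n".join(lines)

print(generate_constants(registry))
```

When a new convention version lands, rerunning the generator updates every derived package mechanically, and maintainer effort shifts from writing the change to reviewing it, which is exactly the shift Young describes.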
Ironically, he said that the explosion in AI coding had increased the burden on maintainers, as they deal with AI slop and badly written code.

Some contributors might think that spewing out AI generated pull requests meant they were more productive, he said.

"But from the perspective of the maintainers, they're like, 'You're just wasting my time.' We've seen the need to add new tools in OpenTelemetry.

Basically, you're watching the maintainers start to turn into mods, like Discord." ®

Source: This article was originally published by The Register

