Host Chris Adams is joined by Niki Manoledaki of Grafana and Ross Fairbanks of Flatpeak in this edition of TWiGS focused on Carbon Aware Spatial Shifting. They dive into Amazon's 2022 Sustainability Report, highlighting 19 AWS regions powered by 100% renewable energy, and explore videos from the Linux Foundation energy summit (links below). They also discover the importance of measuring carbon footprints in personal computing and IT, and learn about Kepler Power Estimation and the PLATYPUS Attack. Plus, they share some exciting upcoming events from the CNCF and some interesting Barbenheimer inspired portmanteaus from the world of Green Software!
Learn more about our people:
Find out more about the GSF:
If you enjoyed this episode then please either:
Niki Manoledaki: When you have that kind of data at your disposal that you didn't previously have, it can really tell a story that you can show to someone else and say, hey, look at this dashboard, you can just see how the energy consumption and temperature and CPU usage correlate with each other. And I think it's fascinating, and I hope we see more of these visualizations.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
I'm your host, Chris Adams.
Hello, and welcome back to the Week in Green Software on Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Today, we're diving into Amazon's 2022 sustainability report. And we'll be exploring Carbon Aware Spatial Shifting with Karmada, Kubernetes, and a new real time carbon footprint standard.
And we'll also be covering a few future events with Green Software. But before we dive in, let me introduce my guests for this episode of This Week in Green Software. With us today, we have Niki Manoledaki. Hi, Niki.
Niki Manoledaki: Hi, it's so nice to be on this podcast. I'm a long time listener, so I'm very excited to be here.
Chris Adams: And we also have Ross Fairbanks. Hey, Ross.
Ross Fairbanks: Hi everyone, I'm also another long term listener, so yeah, excited to be here.
Chris Adams: Cool. All right, before we start, I guess maybe we should do a quick round of introductions for what we do and what we work on, and then we'll just get right into the format of running through some of the news stories that caught our eyes and sharing a few kind of lukewarm to hot takes, depending on how we're feeling.
Okay, Niki, are you okay with me just handing over to you first?
Niki Manoledaki: Yep, so hi, I'm a software engineer at Grafana Labs, working on the back end of Grafana itself. I was previously at Weaveworks, where I worked on eksctl, the CLI for Elastic Kubernetes Service. So I'm excited to talk about the progress with AWS today. I'm also a maintainer of Kepler, which we'll talk about very soon as well, and part of the CNCF TAG, the Technical Advisory Group of the Cloud Native Computing Foundation, for Environmental Sustainability.
So we have a couple of things coming up there as well. We have the Global Week of Sustainability in the second week of October, where we'll have a bunch of local meetups around the world during the same week to talk about sustainability in cloud computing, and we have a new working group in the TAG that we'll talk about as well.
Chris Adams: Cool, exciting. Alright, thank you for coming on then, Niki. And Ross, I know that we've worked together a few times, but for listeners who have not been tracking the repos that we end up messing around in, maybe you can introduce yourself and provide some background as well.
Ross Fairbanks: Yes, yeah, I'm a developer at Flatpeak Energy currently, but I've also worked with Chris at Green Web Foundation on various projects there. The main one really would be Grid Intensity Go, which is a Go library for carbon intensity metrics, and also has a Prometheus exporter. I've also worked a bit on Scaphandre as well, which I think we're going to talk a bit about later, on some of the Kubernetes integration there. And I'm trying to learn some Kepler, so I'm hoping to learn from Niki here.
Chris Adams: Cool, I guess learning from the horse's mouth, as it were, or whatever animal metaphor we're going to use for this. Okay, folks. For, if you've never listened to this podcast before, my name is Chris Adams. I am the executive director of the Green Web Foundation. And I'm also the policy chair for the Green Software Foundation.
I'm also one of the maintainers of co2.js, a library for calculating the environmental impact of digital services. I help organize an online community called ClimateAction.tech, where a number of climate aware techies tend to hang out. So if you haven't listened to this podcast before, the general format is we run through some of the stories that have caught our eye over the last week or so. And sometimes this will be suggestions from the actual guests themselves.
And I think we're just going to start off with one of the big ones, which has made some news in the last week or so: the Amazon 2022 Sustainability Report.
So this was released a week or two ago, and there are a few relatively large findings from the report that comes out each year. What we've linked to is a summary blog post by the previous VP of Cloud and VP of Sustainability there, Adrian Cockcroft. There are a few highlights.
One of the key things is that Amazon are now claiming that 19 of their regions are running on 100% renewable energy, which is a significant increase from the 13 the year before. They've also done something interesting in that they are now being much, much clearer about which regions are running on what they count as 100% versus over 95%.
You can see a few new regions in both India and China, which is a real shift, and we've now got one in Spain as well. So Spain and Zurich. The other thing that might be worth sharing is that, when you look through this report, this is the first time you've actually seen Amazon show a reduction in emissions year on year.
So this is actually one of the largest companies in the world shifting, which makes it a really significant result. Now, the other thing that it might be worth talking a little bit about is that when we talk about renewable energy here, Amazon is using the market-based method, and the blog post we've linked to talks a little bit about how there are different ways of measuring the environmental impact of electricity: whether something is location based, where you look at the energy from the grid specifically, or market based, which takes into account the significant investments in renewable energy made by various organizations to speed up a transition. Ross, I know that you've had a chance to skim over this, in the context of working with Scaphandre and trying to expose metrics.
Is there anything that caught your eye here?
Ross Fairbanks: Yeah, so what I found interesting was the part about how, as we use more renewable energy, the Scope 3 emissions become more important, and how it's really hard to get data for that. And I think it's really interesting what the Boavizta project are doing, where they're producing an open data set of the embodied carbon in devices, using the data they get from the manufacturers, kind of crowdsourcing the information. And as Scope 3 emissions become more important, these sorts of projects will become increasingly important.
Chris Adams: Okay, so you've used a couple of words that I think we might need to just break down. We spoke about Scope 3 emissions here, and these might be considered the supply chain emissions of something. So while there's a carbon footprint from burning fossil fuels to generate electricity, you might consider that Scope 2 here, and if you have to burn, say, fossil fuels to run a generator, then that might be Scope 1 in this scenario.
And actually one thing that is mentioned in this report is a shift to using biofuels rather than fossil fuels for running backup generators. Scope 3, as I understand it, is all the supply chain. So that's all the emissions caused by making a server in the first place. That's what you're referring to in this case.
Ross Fairbanks: Yes, so it's the embodied emissions for those hardware devices.
Chris Adams: Okay, cool. Thank you for clearing that up. All right. There's one other thing that I might draw your attention to, which really caught my eye. This is not Amazon's report specifically; there's a corresponding link from a website called energymonitor.ai, and they're basically quantifying the amount of renewable energy being purchased in various sectors. For the last, say, 10 years, one of the big things has been that technology firms themselves have been the largest investors in renewables. But we've seen another shift in the last year, in that heavy industry has actually moved to eclipse this.
But even now, despite that, between 2022 and 2023, Amazon still made up around 20% of all the renewable energy being bought. And this is the figure that kind of blew my mind: two thirds of all the investment in renewable power right now is coming from Amazon. And this complicates the matter somewhat, because for a long time we've generally seen Amazon as one of the laggards here.
But one thing we see here is that it's more a function of their size: because they're so large and there's so much to move, they can be investing a significant amount and still not be moving proportionally as far as some of the other companies. That gives you an idea of the size of the change we need to be making.
And we spoke a little bit about Kepler and Scaphandre and tools like that, and I wanted to see if we can jump into the next story from this, actually. In previous episodes, we've mentioned the Linux Foundation Energy Summit, and there were a bunch of really interesting talks given there.
But the recordings of these talks are now online for people to see. And there was one talk in particular I'll reference, which is one from a person called Aditya Manglik at ETH Zurich. He was talking about measuring the carbon footprint of personal computing. Now, I don't know if you've actually seen any of this, but this one really caught my eye, because this was someone basically saying, look, we need to have ways of reporting the environmental impact of software at the operating system level.
He was talking about how Windows has all these tools, and OS X has all these tools, but what we really need is something like that for all the servers in the world. And when I spoke to him, he didn't know that much about Kepler at the time; that was a new thing for him. He's now looking into this.
And I figured this might be something in your wheelhouses, folks, because, as I understand it, Kepler is one of the projects this person was essentially calling for: something that works at the Linux level to actually start reporting these numbers. Niki, is that somewhat related to what Kepler does?
Niki Manoledaki: So what Kepler does is it leverages eBPF to look at kernel-level syscalls and performance counters, and it attributes those to Kubernetes resources. Looking at energy data, for example via RAPL in the kernel, is not something that is necessarily new, and there are other tools, such as Scaphandre, that also do this.
What's new with Kepler is this attribution of the energy consumption to workloads running in a container. That's really what's changing things, at least in the cloud native ecosystem. And to add to this, I would like to mention one really interesting study called Measuring IT Carbon Footprint: What is the Current Status Actually?, which came out in June of 2023, by Tom Kennes, sorry if I mispronounced your last name, Tom, but he's very active in the TAG for Environmental Sustainability. What is interesting to note is that what we just discussed previously, reporting carbon through AWS, is top-down carbon monitoring, whereas what Kepler does, and what the talk that you just mentioned focuses on, is bottom-up carbon monitoring, or energy monitoring first, because that would be the first step at the infrastructure level.
So that bottom-up approach to energy, and by extension carbon, monitoring is much more useful for engineers. It's really about the persona in observability: who are those metrics for, and what are they used for? We see top-down carbon reports being useful for carbon accounting, for data center operators, for perhaps CFOs, or whoever is using those reports, but for engineers who are optimizing low-level software, Kepler is much more useful in those use cases.
Chris Adams: Okay, cool. Thank you for this. And I just want to ask: Kepler, yeah, that's a reference to the astronomer from a few hundred years ago, but Kepler is also an acronym, right? I can never remember what it is. Is it Kubernetes something? Help me here, Niki, because it always sounds cool when I hear it. It's super nerdy, but...
Niki Manoledaki: Yes, it's a great acronym. It's the Kubernetes-based Efficient Power Level Exporter. So it exports the data to Prometheus, and you can then visualize that data on a Grafana dashboard. There are some talks out there, and there are some really interesting data visualizations that you can gather that way.
It's a really interesting setup and you can really tell a story through that data. And that's, again, coming to the point on personas of who is this data useful for and for what? Is it for like a platform team that is doing cost and performance optimizations? Is it for SREs? Is it for software developers themselves who want to monitor the energy consumption of their software, like on a release, from one release to another, and how these have changed?
So really thinking about the persona in the story.
Chris Adams: Okay, cool. And Ross, I understand that you've done a bit of work with Kepler as well, right? And you've contributed some code to Scaphandre and some of the other ones here.
Ross Fairbanks: I've looked at it from that angle; I haven't looked at Kepler yet. But because the RAPL measurements are at the CPU socket level, being able to assign those first to the process, then to the container, and, especially in Kubernetes, to the namespace means, like Niki says, you can provide much more context on what a process is actually doing. It's also one of the challenging parts, because with Scaphandre, and I think with Kepler as well, we have the individual process, but then we need to use the cgroup file system to work out which container it was, and then we can get up to the pod level. So that kind of mapping is quite difficult, but that extra context is really useful in those situations, I think.
Chris Adams: So I can't code Rust, but I try to at least write the documentation for how some of this works. If I understand what the two of you are saying, tools like Scaphandre or Kepler essentially allow you to figure out what share of a machine's power usage should be attributed to a particular program.
If it's using half the power, then you can say half of it should go to that program, and that's how you might track it across a fleet of computers. And I think you folks also used this term RAPL. If I remember, this is a reference to the fact that certain computers have chips on them which will share information about the actual energy being used.
So if you know that, say, the computer is using maybe 40 watts of power, and a program is using half of it, you might allocate half of that 40 to the one program. Is that the general idea these things use? Or the kind of approach they tend to take?
Ross Fairbanks: Yes, yeah. RAPL is an Intel technology, and so that's the most commonly used. I think with Kepler, there's also an estimation model that can be used in cloud settings. This is one of the strange things where it's actually easier to do this on bare metal, because then you can access RAPL, whereas doing it in the cloud, because you haven't got access to the physical machine, it starts to get a little harder.
Chris Adams: Ah, okay. All right. Thanks for sharing the extra nuance. I didn't know that Kepler could do that. That's really cool, to actually do that without having access to the computer under the hood. Okay, so, Ross, I know that you've done a bit of work with Scaphandre and other tools like this, and I've been trying to understand how some of this works as well. As I understand it, these tools basically have two parts.
You have one part which essentially measures how much of a machine is being used by a particular process, a particular program, and then there's the combination with this thing you mentioned before, RAPL, which I think stands for Running Average Power Limit or something like that, and that essentially tells you what power is being used.
So if you know that a process is using half the compute in a computer, and you know the computer is using maybe 100 watts of power, then half of that 100 would be 50 watts, so over maybe a couple of hours, you would attribute half of the power to it like that. That's how RAPL-based attribution works. And Niki, you mentioned that Kepler does something like this, but it also has a model as well.
Niki Manoledaki: It has a model, and I think that's also because RAPL is not accessible on a lot of cloud platforms for most workload types. For example, on AWS, most EC2 instances don't give access to RAPL; only the bare metal instances do, which, side note, are more expensive than other EC2 instances.
So there is a little bit of a catch there. The Kepler power estimation model helps to work around some of that and estimate some of the power consumption. And we'll link some of this documentation in the episode notes.
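To make the attribution arithmetic the guests describe a bit more concrete, here is a minimal Python sketch of splitting one measured package power reading (say from RAPL, or from an estimation model) across processes in proportion to their CPU share. The function and key names are illustrative only, not Kepler's or Scaphandre's actual APIs:

```python
def attribute_power(package_watts, cpu_share_by_process):
    """Split one power reading (watts) across processes in
    proportion to each process's share of CPU time in the
    sampling window. Shares should sum to at most 1.0."""
    return {proc: package_watts * share
            for proc, share in cpu_share_by_process.items()}

# Chris's worked example: a 100 W machine, with a process
# using half the compute attributed 50 W.
shares = {"transcoder": 0.5, "web-server": 0.3, "other": 0.2}
per_process = attribute_power(100.0, shares)
print(per_process["transcoder"])  # → 50.0
```

In practice, tools like Kepler derive the CPU shares from eBPF-collected counters and map processes up to containers and pods before doing this division.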
Chris Adams: Got it. Okay, so that's how I understand the role that these two things play. And now we understand that there's been an issue about actually having access to the power usage, because some of these tools will tell you we're using 100% or 50%, but if you don't know what the actual number is, you're left asking, 50% of what?
That's one of the things we're struggling with. And as I understand it, this is probably some of the impetus behind the new work we've seen on the real-time carbon footprint standard from Adrian Cockcroft, where he's basically been saying, look, if we don't have the concrete numbers for electricity, it's going to be really hard for us to work out the footprint of any of these tools.
And therefore, we need to have something like this. And this seems to be one of the new projects that was based around Kepler. That's my understanding, but Niki, I wanted to ask if you've been exposed to any of this, because I think there have been some conversations with people in the Kepler community about this, or about figuring out where to go. Is that correct?
Niki Manoledaki: I'm wondering if this is related to a demo we have from, I think it's Asim from the Green Software Foundation, on the 2nd of August in the CNCF TAG, where we're going to be talking about the specification. I wonder if it's going to be about this, because I hadn't heard of it until now.
Ross Fairbanks: Yes, one of the things we can talk about with Kepler there: the plan, I think, is to use Kepler for the attribution part, the part we were talking about before, where we can go from the socket level to the process, then to the container, and then up to the pod. We'd use Kepler for that because it's already performing that task when it's getting the metrics from RAPL. I also found the proposal really interesting because it goes into some of the security reasons why this is blocked on a lot of the cloud providers. It's because if you can get very accurate energy measurements for, say, decryption algorithms, you can start to break the encryption. But I think the proposal has a really elegant solution, which is to expose all the metrics at one-minute intervals. If you've got per-minute data, that's fine for doing carbon awareness, but people can't use it as an attack vector.
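As an illustration of the mitigation Ross describes (this is a sketch of the idea, not code from the actual proposal), high-frequency power samples can be aggregated into one-minute averages before being exposed:

```python
from collections import defaultdict

def minute_averages(samples):
    """Collapse (unix_seconds, watts) samples into per-minute
    averages: coarse enough to defeat a power side channel
    (e.g. PLATYPUS), but still useful for carbon awareness."""
    buckets = defaultdict(list)
    for ts, watts in samples:
        buckets[int(ts // 60)].append(watts)
    return {minute * 60: sum(w) / len(w)
            for minute, w in buckets.items()}

samples = [(0, 40.0), (30, 60.0), (61, 80.0)]
print(minute_averages(samples))  # → {0: 50.0, 60: 80.0}
```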
Chris Adams: Okay, and coming from someone who's worked for Amazon or Netflix for the last N years, you would assume that there's some weight carried behind that: saying, yes, it is okay to provide minute-level figures, you're not going to get everyone hacked, yes, it's okay to use these tools.
Niki Manoledaki: I think it's called the PLATYPUS attack, where some secrets can be inferred from power metrics. It's the PLATYPUS attack, if I'm not mistaken. Great name.
Chris Adams: Sounds about right, yeah. There's a bunch of these, actually. I know there's one where people realized you could use the flashing access light on a computer's disk drive: if you know when it's flashing, that's an indication of when it's reading from disk, and that has been enough for people to carry out attacks to break some encryption before.
So you can see why someone would be a bit reticent about this. But to then have someone say, I understand security, and one-minute resolution is sufficient to keep people safe while still allowing people to report meaningful figures, is actually a big thing. And bear in mind who this is coming from: we'll share a link, I think from 2007, where Adrian wrote a paper called Utilization is Virtually Useless as a Metric, talking about all the different things you need to take into account with cloud, back in 2007.
So more than a decade later, we've got the same person talking about this. That suggests it has some substance to it, and we've got a real chance to come up with some meaningful metrics. Alright, we went down a massive nerd rabbit hole there, I think, folks. The next story proposed here: there's some work in the IETF, for people who are curious about this. The IETF is the Internet Engineering Task Force, and there's a current RFC, which is basically a proposal for creating a kind of carbon footprint header in HTTP requests. So this is currently being discussed.
And as I understand it, this was also an idea that was proposed, and there was even a talk about it at GrafanaCon recently. Niki, I haven't seen this, but I wondered if you might know anything about it, or if it has come up on your radar, because I know that Grafana ends up being used as the de facto dashboard in lots of places.
Niki Manoledaki: The HTTP header containing CO2 emissions has been on my radar for a while, and I only just realized that it was connected to Sentry Software. So Bertrand Martin did a talk at GrafanaCon on reducing data center energy usage with Grafana, and that's a really interesting use case. Again, looking at a data center as a whole, where you have access to RAPL, you're not on a public cloud provider, all of the data is at your disposal. And I think they reduced the data center's electricity usage by 15%.
Also, the temperature was increased from 18 degrees Celsius to 27 degrees, and a lot of the power savings were achieved through this; it's a really interesting use case. There was another talk featured at GrafanaCon, by Chen Wang at IBM. She's also in the TAG. They were using Kepler to measure some of their workloads' energy consumption.
And they also achieved, if I'm not mistaken, 75% power savings in their data center, some incredible numbers. What both of these talks have in common is that they use Grafana dashboards to visualize those metrics. So I think there's a really interesting point here about the power of storytelling. When you have that kind of data at your disposal that you didn't previously have, it can really tell a story that you can show to someone else and say, hey, look at this dashboard, you can just see how the energy consumption and temperature and CPU usage correlate with each other.
And I think it's fascinating, and I hope we see more of these visualizations.
Ross Fairbanks: The part on cooling I think is really interesting. I went to a talk at one of the KubeCons where someone, I think from the Open Compute Project, was looking at it. Because for waste heat as well, I think there are lots of potential uses, like district heating, those types of things. Heating and cooling, as well as water usage, are a couple of things that aren't always looked at; we focus a lot on energy consumption, but there are other aspects as well that I think are really important.
Chris Adams: Okay. That's quite a nice graceful link. You can talk about energy efficiency all day long, and it sounds like there are ways to actually get access to this, and we've seen examples of talks about, okay, these are the things I can do to reduce the energy usage.
But there are other levers too, specifically around affecting the carbon intensity of the electricity, if we're only going to look at carbon intensity before looking at things like changing the life cycle of hardware. And Ross, I think the next one is a link to a post that you shared, which I think helps explain some of the differences between the approaches people are currently taking when they try to shift the carbon intensity of computing by either moving it through time or moving it through space.
And you've been doing some work with a tool called Karmada, which might not be that well known, because most of the work happens in a very significant community in China and other parts of the world, right?
Ross Fairbanks: So Karmada is a CNCF project that does multi-cluster scheduling for Kubernetes. It's effectively a federation: you have one Kubernetes cluster that's your control plane cluster, and then you can join multiple member clusters to it. And, especially for carbon intensity, those member clusters could be in different regions, using different electricity grids, and they could be on different cloud providers. So the work I was doing was creating a Kubernetes operator called the Carbon Aware Karmada Operator, which gets a list of the clusters that are available and gets the carbon intensity for each of those locations. It actually uses the Grid Intensity Go project that you and I have worked on at the Green Web Foundation to get the metrics, primarily from Electricity Maps, using their free tier. Then, once you have the carbon intensity of those clusters, it looks at the workloads, and you can say, I want to run these workloads in the two clusters, say out of three or four, that have the lowest carbon intensity. So that's the high level of how Karmada works, and the operator just adds carbon intensity onto what Karmada can already do.
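The placement rule Ross describes can be sketched in a few lines of Python; the names here are illustrative, not the Carbon Aware Karmada Operator's actual types or API:

```python
def pick_greenest_clusters(carbon_intensity, n=2):
    """Given a mapping of cluster name to grid carbon intensity
    (gCO2e/kWh), return the n clusters with the lowest intensity,
    i.e. where the workloads should be placed."""
    return sorted(carbon_intensity, key=carbon_intensity.get)[:n]

# Hypothetical intensities for three member clusters
intensity = {"eu-north": 30, "us-east": 400, "eu-central": 250}
print(pick_greenest_clusters(intensity))  # → ['eu-north', 'eu-central']
```

In the real operator, the intensity values would come from Grid Intensity Go backed by Electricity Maps, and the actual workload movement happens through Karmada's propagation machinery rather than a direct function call.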
Chris Adams: Okay, and I am aware there's another operator that was published by Microsoft which focused on moving things through time, not moving things through space. Is that correct?
Ross Fairbanks: Yes, this is what's referred to as temporal shifting, rather than spatial shifting. And temporal shifting is something I've been interested in for a long time. It's for jobs you have that aren't time sensitive. The classic example is when you upload a YouTube video: Google needs to transcode the recording, but it doesn't need to happen straight away, unless there are people actually waiting for it.
You can actually delay that, maybe even up to 24 hours, and people won't notice. What the Carbon Aware KEDA Operator does is get the carbon intensity forecast for an area and then set the maximum number of replicas. So it's actually doing demand shaping. It's saying, depending on the carbon intensity, we want to run more or less of this workload.
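Here is a hedged sketch of that demand-shaping idea in Python. The thresholds and the linear ramp are invented for illustration; they are not the Carbon Aware KEDA Operator's actual configuration:

```python
def replica_ceiling(intensity, low=200, high=500, floor=1, cap=10):
    """Demand shaping: as forecast grid carbon intensity
    (gCO2e/kWh) rises from `low` to `high`, ramp the maximum
    replica count down from `cap` to `floor`."""
    if intensity <= low:
        return cap
    if intensity >= high:
        return floor
    frac = (high - intensity) / (high - low)  # 1.0 at low, 0.0 at high
    return floor + round(frac * (cap - floor))

print(replica_ceiling(150))  # → 10  (clean grid: run everything)
print(replica_ceiling(600))  # → 1   (dirty grid: run the minimum)
```

An autoscaler would then treat the returned value as the upper bound on replicas, letting normal load-based scaling operate beneath it.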
Chris Adams: Okay, so that's one, and Karmada is doing space now. This sounds a little bit sci-fi. Are we already doing time and space at the same time, or is this the next frontier, as it were?
Ross Fairbanks: Yes, this is the next frontier. For the current work I'm doing with Karmada, it's a very simple scheduling algorithm; it just uses the lowest carbon intensity. But what you could do is look at the forecast and say, actually, for the next two hours, I know the carbon intensity is going to be low, so I'll move this thing here. Whereas if you know from the forecast that the carbon intensity is about to increase, maybe this isn't the right region, and you can put it in another area. So I think as we get more into this topic, people will start doing more sophisticated scheduling.
Chris Adams: Okay, maybe this is a nice time to refer to some of the things we saw at HotCarbon, in that case, because I believe you shared a link to some work from the recent HotCarbon conference, which now has its videos visible. I think there was a person called Diptyaroop Maji; this was related to the VMware work.
Maybe you could just expand on this one here, because there are a couple of other really nice talks from HotCarbon that it'd be nice to refer to.
Ross Fairbanks: Yeah, so this is doing spatial shifting, but applying it at a different layer. This paper is from a team that was looking at the VMware global load balancer. By default, the load balancer will route traffic to the closest data center, but they were adding a carbon intensity module to say, can we actually reduce the emissions by routing it to a different data center? What's nice is that the algorithm they're using also considers the location: if you're moving the data too far and it's going to impact performance, it takes into account both carbon intensity and location, which I really liked. It's similar to areas we've been looking at, but applied at a different layer in the networking stack.
Chris Adams: Ah, I see. So, I've got a request coming in to visit a web page: please generate the web page, but whoever's the greenest and close enough to do it within a time limit, right? So it doesn't look like I'm slowing everything down. That's the general idea, right?
Ross Fairbanks: So you can include the distance the packets have to travel as well, and I think it considers the performance while also reducing emissions. I think it was about 21% they found in the paper; they could reduce the emissions by that much by introducing this module.
Ross Fairbanks: Yes, yeah, although I should just include the caveat, it's kind of a prototype that they're working on at the moment, but I think there's a lot of potential to use it in this area.
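The trade-off described above, routing to a cleaner data center only when the extra distance won't hurt performance too much, could be sketched as a weighted score over candidates. This is a toy illustration under made-up numbers, not the algorithm from the paper.

```python
# Toy illustration of carbon- and distance-aware routing, NOT the paper's
# actual algorithm: each candidate data center gets a score mixing its
# grid carbon intensity with its distance from the client, and candidates
# beyond a latency budget are excluded outright.

def route(candidates, max_distance_km=3000.0, carbon_weight=0.7):
    """candidates: list of (name, carbon_gco2_per_kwh, distance_km).
    Returns the name of the data center with the best combined score."""
    eligible = [c for c in candidates if c[2] <= max_distance_km]
    if not eligible:
        eligible = candidates  # fall back to considering everything
    # Normalize both factors to [0, 1] before mixing them.
    max_carbon = max(c[1] for c in eligible)
    max_dist = max(c[2] for c in eligible)
    def score(c):
        return (carbon_weight * c[1] / max_carbon
                + (1 - carbon_weight) * c[2] / max_dist)
    return min(eligible, key=score)[0]

candidates = [
    ("dc-nearby", 450.0, 200.0),    # close, but on a dirty grid
    ("dc-cleaner", 80.0, 1500.0),   # further away, much cleaner grid
    ("dc-far", 30.0, 9000.0),       # cleanest, but over the latency budget
]
print(route(candidates))  # dc-cleaner
```

Setting `carbon_weight=0.0` recovers the default closest-data-center behaviour, which is what makes the carbon module an additive change rather than a replacement for the load balancer's existing logic.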
Chris Adams: Cool. All right. So this talk here is the first time I've seen someone speaking about getting rid of the assumption that you're looking at one computer. It might be that the actual resources you're using, like a disk or memory, are physically on a machine somewhere else, because you've got a kind of disaggregated approach to data centers these days, rather than just a single variant of a desktop machine.
That's the key thing I took from it. Okay, so that concludes our deep dive into the wonders of cloud computing and Kubernetes. And if you have made it this far, thank you for staying with us. We're just going to do a quick roundup of upcoming events that may be interesting to technologists who are looking at this.
Niki, I know there's a couple of events that you mentioned on the radar for you. Any chance you could refer to those, or just give a quick reminder for people for these ones here?
Niki Manoledaki: There are a few events that we are organizing in the CNCF TAG. One of the main ones that we're preparing for at the moment is in October. We're planning the Global Week of Sustainability. So that's going to be events all over the world. I think we have a couple dozen cities represented at the moment.
Happening in the second week of October, I'll be talking about cloud native environmental sustainability in our local meetup groups, that's the CNCF meetup groups, and yeah, find one near you or feel free to organize an event. We have a guide for local meetup organizers, and that's very exciting. Another thing that is coming up: we do have demos and talks in the CNCF TAG Environmental Sustainability regular meeting. That's every first and third Wednesday of the month at 5 p.m. Central European Time. And we do have a talk on the 2nd of August by Asim [Hussain] from the Green Software Foundation, and we're going to be talking about some of the specifications around measurements for carbon during that meeting.
And lastly, we do have a new working group for green reviews that I wanted to give a shout out for, and we're going to be meeting every second and fourth Wednesday of the month. Those meetings are open to everyone, and in this working group we're going to be looking at evaluating the sustainability of various projects.
So Karmada and KEDA that Ross mentioned, for example. And so we're going to be looking at how to use Kepler on infrastructure that is available through the CNCF and how to set up those pipelines for measuring the carbon intensity of cloud native software and doing those assessments of cloud native tooling.
So that's very exciting.
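To give a rough sense of the arithmetic such a measurement pipeline performs, energy counters like the per-container joules that Kepler exposes can be combined with a grid carbon intensity figure to estimate operational emissions. The sample readings and intensity below are made up for illustration; this is not Kepler's actual API.

```python
# Illustrative conversion from an energy counter delta (joules) to
# operational carbon, the kind of calculation a Kepler-based pipeline
# performs. The sample numbers below are hypothetical.

JOULES_PER_KWH = 3_600_000  # 1 kWh = 3.6 million joules

def grams_co2eq(joules: float, grid_intensity_g_per_kwh: float) -> float:
    """Convert consumed energy in joules to grams of CO2-equivalent,
    given the average grid carbon intensity over the same window."""
    return (joules / JOULES_PER_KWH) * grid_intensity_g_per_kwh

# e.g. a container that consumed 7.2 MJ over the measurement window,
# running on a grid averaging 300 gCO2eq/kWh:
print(grams_co2eq(7_200_000, 300.0))  # 600.0
```

In practice the joules figure would come from the delta of a counter metric scraped from Kepler, and the intensity from a grid data provider for the cluster's region.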
Chris Adams: Cool. Thank you for sharing that. I will be sharing links specifically to this, so that if it's caught anyone's eye, they'll see where to go next. All right, so we've covered some of the events. We've gone into a super nerdy deep dive into the wonders of cloud computing, Kubernetes, and all the various ways you might measure that.
I think we just have to round up with some of the closing questions now. Chris, our producer, he throws these curveballs every single week. And this week there's been a bunch of hype in the news about the term Barbenheimer, this kind of portmanteau between Barbie and Oppenheimer, releasing on the same day.
Now, we've seen a few other portmanteaus, I know that Adrian Cockcroft has been pushing for DevSusOps, and if you look at the sustainable web movement, there's this term SustiWeb that's floating around. I wanted to see if either of you have any portmanteaus that you either love or hate in this field that might be worth sharing with others while we're here.
And I know there's at least one that's been shared here. I'm not sure whose creation this one is, but maybe one of you might explain what hemigration is, perhaps?
Ross Fairbanks: Yes, that would be me, yes. Staying in this rabbit hole we've been in today. Hemigration is moving applications between hemispheres. This is actually an idea that's in the GSF Carbon Awareness Docs, and it's about moving your applications to the hemisphere that has the most daylight hours to make the most of the solar power that's available.
And yeah, if you can move your workload, I just like the idea of this, your applications moving with the seasons.
Chris Adams: Of course, it's like the opposite of chasing the moon, which is what people were talking about 10 years ago, because it's colder at night, so you won't need so much cooling. So the flip side now is, yes, it's warmer, but because there's more sun, the energy is going to be cleaner, right?
Ross Fairbanks: Yes, exactly, that's it.
Chris Adams: Okay, cool.
And I see another one, which is GreenOps, from... Okay, Niki, this is your suggestion, right?
Niki Manoledaki: Yeah, I don't know how common GreenOps is as a term. I haven't really heard this term mentioned on the podcast so far, correct me if I'm wrong, but GreenOps takes its name from DevOps and FinOps: operations related to development, or operations related to cost optimization. And the idea is to apply some of the strategies of FinOps to optimizing for carbon emissions and energy consumption. So that's GreenOps. It's a very loosely defined term in terms of what GreenOps looks like and what practices exist. Usually the idea is that if you reduce your resource utilization and implement FinOps practices, you may be reducing the carbon that you emit through your infrastructure.
Chris Adams: That's your one, yeah? Yours is GreenOps. Okay, as I understand it, Google and ThoughtWorks are big proponents of this GreenOps term, and you'll see it in a bunch of their marketing and their literature. I'm afraid I actually don't have a really good one myself, and I think, now that we actually have Fetch, I can't even make a joke about making Fetch happen.
I think I'll spare you any of my particular kind of dad-joke puns for the day. But what I will say is thank you so much for coming on. I really enjoyed diving into the depths of some of the specifics about how different tools make it possible to understand and optimize for carbon and optimize for energy use, like you mentioned here.
So yeah, thank you so much for coming on, you two. I guess I'll see you folks in either the working groups, or in the Slacks, or in various other places. Just before I do go, I want to check: if people were interested in any of the things that you've discussed, Niki, where would you suggest people go?
If people want to find out more about the stuff you're doing, are there one or two links that you would really draw people's attention to?
Niki Manoledaki: I would love to see people join the CNCF Slack channel for the TAG for Environmental Sustainability. That's where we have most of the communication; we post a lot of links and blog posts, like the ones Ross shared, and we organize through that channel. So that's our main form of communication.
Chris Adams: Cool. Thank you. And Ross, if there's anything that you would point people to, where would you direct people's eyeballs for this?
Ross Fairbanks: Yes, yeah. I'd direct them to the climateaction.tech community, which I think Chris and I, we're both there as well. Especially the Green Room channel, which gets a lot of these kinds of discussions. I find it really useful for researching these topics as well.
Chris Adams: Brilliant. I think that takes us to the end. This has been really fun. Thank you one more time. And that's all for this episode of the Week in Green Software. For all the resources in this episode, you can visit podcast.greensoftware.foundation to listen to more episodes of Environment Variables and see all the links that we mentioned and all the sites that we found.
See you in the next episode. Thanks a lot and bye for now.
Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show and of course, we'd love to have more listeners.
To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.