Ask Sawal

Discussion Forum

Answer


In this article you will learn all that you need to get started using enumerate() in your code. Namely, we will explore:

- what enumerate() is and when to use it
- the syntax enumerate(iterable, start=0) and the optional start parameter
- converting the output of enumerate() into a list, tuple, or dictionary
- using enumerate() with dictionaries and with your own iterable classes
- unpacking the output of enumerate() in a for loop

Let's get started.

Let's take an example. Suppose we have a list of student names:
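For example (these names are chosen so that 'Wednesday' sits at index 0 and 'Bianca' at index 3, matching the discussion below):

    names = ['Wednesday', 'Enid', 'Rowan', 'Bianca']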

We want to create a list of tuples, where each tuple item contains a student ID number and student name like this:
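    [(0, 'Wednesday'), (1, 'Enid'), (2, 'Rowan'), (3, 'Bianca')]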

The student ID is the index of the student name in the names list. So in the tuple (3, 'Bianca') student Bianca has an ID of 3 since Bianca is at index 3 in the names list. Similarly in (0, 'Wednesday'), student Wednesday has an ID of 0 since she is at index 0 in the names list.

Whenever we come across situations where we want to use a list item as well as the index of that list item, we use enumerate(). enumerate() automatically keeps track of the order in which items are accessed, which gives us the index value without the need to maintain a separate index variable.

Here's how we can create the student ID and name list of tuples using enumerate():
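    student_ids = list(enumerate(names))
    print(student_ids)
    # [(0, 'Wednesday'), (1, 'Enid'), (2, 'Rowan'), (3, 'Bianca')]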

Let's take a closer look at the syntax for this function.

First, let's look at enumerate(iterable, start=0).

enumerate() needs only two input arguments:

- an iterable: any object that supports iteration, like a list, tuple, string, or dictionary
- start: an optional integer giving the count value to start from; it defaults to 0

enumerate() will return an enumerate object, an iterator that essentially holds a sequence of tuples. Each tuple contains an item from the iterable and the item's count value.

For more details on the input arguments and variations, you can refer to the official Python documentation.

We can call enumerate like this:
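    enumerate(names)
    # <enumerate object at 0x...>  (the memory address will vary)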

The output is the enumerate object. To view the elements of the enumeration, we can use a list like this:
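    list(enumerate(names))
    # [(0, 'Wednesday'), (1, 'Enid'), (2, 'Rowan'), (3, 'Bianca')]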

We get a list of tuples. Each tuple is of the form (count, element of names list) with the default start value of zero, so the count starts at zero.

The first element of the names list and count = 0 form the first tuple. The second element and count = 1 form the second tuple. Similarly, the fourth element and count = 3 form the last tuple.

There are different ways to invoke enumerate(), such as:

- with the default start value of zero: enumerate(iterable)
- with a custom start value: enumerate(iterable, start=n)
- wrapped in a constructor to get a list, tuple, or dictionary

Let's look at each of these with examples.

You'll want to use this option when you have a requirement that the index values must start from some specific value. For example, for this student name list:
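    names = ['Wednesday', 'Enid', 'Rowan', 'Bianca']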

We want a list of student IDs and names with the restriction that the IDs must start from 1. In that case, we can invoke enumerate with a start parameter like this:
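    list(enumerate(names, start=1))
    # [(1, 'Wednesday'), (2, 'Enid'), (3, 'Rowan'), (4, 'Bianca')]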

Now the count value returned by enumerate starts at 1 and not zero like in the previous output. If we have a restriction that student IDs must start from 100, then we can get the desired output simply by setting start=100:
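    list(enumerate(names, start=100))
    # [(100, 'Wednesday'), (101, 'Enid'), (102, 'Rowan'), (103, 'Bianca')]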

We can convert the output of enumeration into a list, tuple, or dictionary by calling the corresponding constructor of that type.

To get a list, we use this syntax:
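    list(enumerate(names))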

For a tuple, we use this syntax:
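    tuple(enumerate(names))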

Notice how the outputs look almost alike, except the tuples in the first one are enclosed in [ ], signifying it is a list of tuples. In the second one, they're enclosed in ( ), meaning it is a tuple of tuples:
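    [(0, 'Wednesday'), (1, 'Enid'), (2, 'Rowan'), (3, 'Bianca')]   # list of tuples
    ((0, 'Wednesday'), (1, 'Enid'), (2, 'Rowan'), (3, 'Bianca'))   # tuple of tuples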

For a dictionary, use the constructor like this:
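    dict(enumerate(names))
    # {0: 'Wednesday', 1: 'Enid', 2: 'Rowan', 3: 'Bianca'}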

The default way enumerate treats dictionaries is a little different than how it works with lists or tuples.

Dictionaries are a mapping type with a key value pair, and enumerate() only iterates over the keys and not the values by default.

For example, take a small, made-up dictionary mapping each student name to a special power:
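    students = {'Wednesday': 'visions', 'Enid': 'werewolf'}   # hypothetical key-value pairs
    list(enumerate(students))
    # [(0, 'Wednesday'), (1, 'Enid')]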

enumerate() considers only the keys of the dictionary and returns them with the count value. This is not useful when we want an index together with both the key and the value.

We can enumerate over both keys and values like this:
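    list(enumerate(students.items()))
    # [(0, ('Wednesday', 'visions')), (1, ('Enid', 'werewolf'))]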

We may often come across situations where we need to maintain a collection of user-defined objects and iterate over that collection. An object can be iterated over if it has the __iter__ and __next__ methods defined.

In this section, we'll learn how to create our own iterable and then use enumerate with it.

Let's say we want to keep track of which students are attending a fictional school called Nevermore Academy in which year. We create a Student class to represent each student and a Nevermore class to represent the school.

We want to perform the same task as we did previously: create a list of tuples with student IDs and student names. But now, instead of a list, we have to deal with a list of objects stored in an instance variable of an object of type Nevermore.

Here's the definition for the Student class. For each student, we have two instance variables – student name and special power of the student.
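A minimal sketch (the instance variable names name and power are assumptions based on the description above):

    class Student:
        def __init__(self, name, power):
            self.name = name    # student name
            self.power = power  # special power of the student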

Now let's create a few Student objects:
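(The specific powers here are made up for illustration.)

    rowan = Student('Rowan', 'telekinesis')
    enid = Student('Enid', 'werewolf')
    wednesday = Student('Wednesday', 'psychic visions')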

Next, let's define the Nevermore class. It has 3 instance variables to store the year, the list of Student objects attending Nevermore that year, and an index variable i. This variable will be used for iteration in the __next__ method.

The constructor looks like this:
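    class Nevermore:
        def __init__(self, year):
            self.year = year        # e.g. '2022'
            self.students = []      # list of Student objects for that year
            self.i = 0              # index used for iteration in __next__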

Let's add an instance method that populates the students instance variable (the method name add_student below is our choice; any name works):
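    # inside the Nevermore class
        def add_student(self, student):
            self.students.append(student)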

It takes as input a Student object and appends it to the list.

Next, we define the methods we need to make it an iterable:
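    # inside the Nevermore class
        def __iter__(self):
            self.i = 0
            return self

        def __next__(self):
            if self.i < len(self.students):
                name = self.students[self.i].name
                self.i += 1
                return name
            raise StopIteration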

Enumerate will be accessing the items in the iterable based on what the __next__ method returns.

In __next__ we go over the list using the instance variable i as the index. So long as the index is valid, we return the name of the Student object at that position in the students list.

Once we have gone over all students, we raise a StopIteration exception, which is the standard way to signal the end of iteration.

Here's the full class definition:
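    class Nevermore:
        def __init__(self, year):
            self.year = year
            self.students = []
            self.i = 0

        def add_student(self, student):
            self.students.append(student)

        def __iter__(self):
            self.i = 0
            return self

        def __next__(self):
            if self.i < len(self.students):
                name = self.students[self.i].name
                self.i += 1
                return name
            raise StopIteration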

Let's create a Nevermore object for the year 2022:
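    batch = Nevermore('2022')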

And now let's add some students to the 2022 batch:
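    batch.add_student(rowan)
    batch.add_student(enid)
    batch.add_student(wednesday)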

batch is now our custom object that has instance variables – year, a string, and students, a list of Student objects. We now invoke enumerate like this:
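    list(enumerate(batch))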

We'll get the output below, where the count at which each student object was accessed is our student ID, followed by the student name. We added Rowan first to the list, so its count value is 0. We added Enid second, so its count value is 1, and so on.
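Assuming the three students above were added in that order:

    [(0, 'Rowan'), (1, 'Enid'), (2, 'Wednesday')]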

In our applications, we might want to use the output from enumerate for further processing, like getting the student ID and name and then using that value to access another data structure. The most common way to utilize enumerate() is through a for loop.

We can unpack the output of the enumeration within a for loop like this:
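    for student_id, name in enumerate(names):
        print(student_id, name)
    # 0 Wednesday
    # 1 Enid
    # 2 Rowan
    # 3 Bianca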

enumerate() returns a tuple for each item of the iterable. The first value in the tuple is the count, which we store in the student_id loop variable. The second value is the list item, which we store in the name loop variable.

We might have a dataframe where, corresponding to each student, we have certain other information like extracurricular activities. We set the index to student_id so we can access any row in the dataframe by its student_id value using the df.loc method.

Using the student_id and name from enumeration, we can access the dataframe like this:
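A minimal sketch with a made-up dataframe (the activity column and its values are purely illustrative):

    import pandas as pd

    df = pd.DataFrame({
        'student_id': [0, 1, 2, 3],
        'activity': ['cello', 'blogging', 'fencing', 'beekeeping'],  # hypothetical data
    }).set_index('student_id')

    for student_id, name in enumerate(names):
        print(name, df.loc[student_id, 'activity'])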


Answer is posted for the following question.

what is enumerate in python?

Answer


Open rasphone and click the Connect button and you will connect. In my case there were many redundant VPN connections, so I edited the *.pbk file and ...


Answer is posted for the following question.

How to edit rasphone.pbk?

Answer


Check your eligibility and get lower interest rates: land loan @ 8.50%*. Customers can use the home loan EMI calculator to know how much EMI they will be ...


Answer is posted for the following question.

How much is a loan for land?

Answer


EFY counselor: The program is dedicated to helping youth establish habits and skills, like study habits and goal setting, that will benefit ...


Answer is posted for the following question.

How to prepare for efy?

Answer


Padmawati Kiryana dal chawal

Rajkot, Gujarat


Answer is posted for the following question.

Is there any best Dal Chawal in Rajkot, Gujarat?

Answer


Tirupati Bala Ji Confectionary Akram biscuit restaurants

Gangtok, Sikkim


Answer is posted for the following question.

Where can I find best Biscuit Restaurants in Gangtok, Sikkim?

Answer


Git is a distributed version control system, so by default each Git repository has a copy of all files in the entire history. Even moderately-sized teams can create thousands of commits adding hundreds of megabytes to the repository every month. As your repository grows, Git may struggle to manage all that data. Time spent waiting for git status to report modified files or git fetch to get the latest data is time wasted. As these commands get slower, developers stop waiting and start switching context. Context switches harm developer productivity.

At Microsoft, we support the Windows OS repository using VFS for Git (formerly GVFS). VFS for Git uses a virtualized filesystem to bypass many assumptions about repository size, enabling the Windows developers to use Git at a scale previously thought impossible.

While supporting VFS for Git, we identified performance bottlenecks using a custom trace system and collecting user feedback. We made several contributions to the Git client, including the commit-graph file and improvements to git push and sparse-checkout. Building on these contributions and many other recent improvements to Git, we began a project to support very large repositories without needing a virtualized filesystem.

Today we are excited to announce the result of those efforts – Scalar. Scalar accelerates your Git workflow, no matter the size or shape of your repository. And it does it in ways we believe can all make their way into Git, with Scalar doing less and Git doing much more over time.

Scalar is a .NET Core application with installers available for Windows and macOS. Scalar maximizes your Git command performance by setting recommended config values and running background maintenance. You can clone a repository using the GVFS protocol if your repository is hosted by Azure Repos. This is how we will support the next largest Git repository: Microsoft Office.

In the rest of this post, I’ll share three important lessons that informed Scalar’s design:

- the size of the working directory, not just the repository, governs the speed of the core commands
- object transfer must shrink: download only the commits, trees, and blobs you actually need
- maintenance must happen in the background, never blocking a user’s command

Finally, I share our plan for contributing these features to the Git client. You can get started with Scalar using the instructions below.

Scalar accelerates Git commands in your existing repositories, no matter what service you use to host those repositories. All you need to do is register your biggest repositories with Scalar and then see how much faster your Git experience becomes.

To get started, download and install the latest Scalar release. Scalar currently requires a custom version of Git. We plan to remove that requirement after we contribute enough features to the core Git client.

Before beginning, ensure you have the correct versions:

- the latest Scalar release
- the custom version of Git that Scalar currently requires

From the working directory of your Git repository, run scalar register to make Scalar aware of your repository.

When you register your repository with Scalar, it will set up some local Git config options and start running background maintenance. If you decide that you do not want Scalar running maintenance, then scalar pause will delay all maintenance for 12 hours, or scalar unregister will stop all future maintenance on the current repository.
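For example, from the repository's working directory:

    scalar register      # set recommended config and schedule background maintenance
    scalar pause         # delay all maintenance for 12 hours
    scalar unregister    # stop all future maintenance on this repository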

You can watch what Scalar does by checking the log files in your .git/logs directory. For example, here is a section of logs from my repository containing the Git source code:

These logs show the details from updating the Git commit-graph in the background, the equivalent of the scalar run commit-graph command.

You can run maintenance in the foreground using the scalar run command. When given the all option, Scalar runs all maintenance steps in a single command:
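    scalar run all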

The scalar run command exists so you can run maintenance tasks on your own schedule or in conjunction with the background maintenance schedule provided by scalar register.

If you are considering using Scalar with the GVFS protocol and Azure Repos, then you can try cloning a new enlistment using scalar clone. Scalar automatically registers this new enlistment, so it will benefit from all the config options and maintenance described above.

By following the snippet below, you can clone a mirror of the Scalar source code using the GVFS protocol:
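(The URL below is a placeholder; substitute the address of the GVFS-enabled Scalar mirror on Azure Repos.)

    scalar clone <URL of the Scalar mirror>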

Note that this repository is not large enough to really need the GVFS protocol. We have not set up a GVFS cache server for this repository, but any sufficiently large repository being used by a large group of users should set up a co-located cache server for handling GVFS protocol requests. If you do not have the resources to set up this infrastructure, then perhaps the GVFS protocol is not a good fit, and instead you could use scalar register on an existing Git repository using the Git protocol.

When using scalar clone, the working directory contains only the files at root using the Git sparse-checkout feature in cone mode. You can expand the files in your working directory using the git sparse-checkout set command, or fully populate your working directory by running git sparse-checkout disable.
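For example (the directory names are placeholders):

    git sparse-checkout set <dir1> <dir2>   # populate only the listed directories
    git sparse-checkout disable             # fully populate the working directory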

Note that the clone created the scalar directory, and the working directory is inside a src directory one level down. This allows creating sibling directories for build output files, so those files don't add to the work Git must do when managing your repository. This leads to the first big lesson we learned about making Git as fast as possible.

The most common Git commands are git status to see what changes are available, git add to stage those changes before committing, and git checkout to change your working directory to match a different version. We call these the core commands.

Each core command inspects the working directory to see how Git’s view of the working directory agrees with what is actually on-disk. There are a few different measurements for how “big” this set can be: the index size, the populated size, and the modified size.

The Git index is a list of every tracked path at your current HEAD. This file is read and written by each core command, so it represents a minimum amount of work for each of them.

In the Windows OS repository, the index contains over three million entries. We minimize the index file size by using an updated version of the index file format, which compresses the index file from 400 MB to 250 MB. Since this size primarily impacts reading and writing a stream from a single file, the average time per index entry is very low.

How many paths in the index are actually in your working directory? This is normally equal to the number of tracked files in the index, but Git’s sparse-checkout feature can make it smaller. It takes a little bit of work to design your repository to work with sparse-checkout, but it can allow most developers to populate a fraction of the total paths and still build the components necessary for their daily work.

Scalar leans into the sparse-checkout feature, so much so that the scalar clone command creates a sparse working directory by default. At the start, only the files in the root directory are present. It is up to the user to request more directories, increasing the populated size. This mode can be overridden using the --full-clone option.

The populated size is always at most the number of tracked files. The average cost of populating a file is much higher than adjusting an index entry due to the amount of data involved, so it is more critical to minimize the number of populated files than to minimize the total number of paths in the repository. It is even more expensive to determine which populated files were modified by the user.

The modified size is the number of paths in the working directory that differ from the version in the index. This includes all files that are untracked or ignored by Git. This size determines the minimum amount of work that Git must do to update the index and its caches during the core commands.

Without assistance, Git needs to scan the entire working directory to find which paths were modified. As the populated size increases, this can become extremely slow.

Scalar painlessly configures your Git repository to work better with modified files using the fsmonitor Git feature and the Watchman tool. Git uses the fsmonitor hook to discover the list of paths that were modified since the last index update, then focuses its work in inspecting only those paths instead of every populated path. Our team originally contributed the fsmonitor feature to Git, and we continue to contribute improvements.

Now that the working directory is under control, let’s investigate another expensive dimension of Git at scale. Git expects a complete copy of all objects, both currently referenced and all versions in history. This can be a massive amount of data to transfer — especially when you only need objects near your current branch to do a checkout and get on with your work.

For example, in the Windows OS repository, the complete set contains over 100 GB of compressed data. This is incredibly expensive for both the server and the client. Not only is that a lot of data to transfer over the network, but the client needs to verify that all 90 million Git objects hash to the correct values.

We created the GVFS protocol to significantly reduce object transfer. This protocol is currently only available on Azure Repos. It solved one of the major issues with adapting Git to very large repositories by relaxing the distributed nature of Git to become slightly more coupled to a central server for missing objects. It has since inspired the Git partial clone feature which has very similar goals.

When using the GVFS protocol, an initial clone downloads a set of pack-files containing only commits and trees. A clone of the Windows OS repository downloads about 15 GB of data containing 40 million commits and trees. With these objects on-disk, we can generate a view of the working directory and examine commit history using git log.

The GVFS protocol also allows dynamically downloading Git objects as-needed. This pairs well with our work to reduce the populated size using sparse checkout, since reducing the populated size reduces the number of required objects.

To reduce latency and increase throughput, we allow the GVFS protocol to be proxied through a set of cache servers that are co-located with the end users and build machines. This has an added bonus of reducing stress on the central server. We intend to contribute this idea to the Git protocol.

There is no free lunch. Large repositories require upkeep. We can’t make users wait, so we defer these operations to background processes.

Git typically handles maintenance by running garbage collection (GC) with the git gc --auto command at the end of several common commands, like git commit and git fetch. Auto-GC checks your .git directory to see if certain thresholds are met to run garbage collection. If the thresholds are met, it completely rewrites all object data, a process that includes a CPU-intensive compression step. This can cause simple commands like git commit to be blocked for minutes. A rewrite of tens of gigabytes of data can also bring your entire system to a standstill because it consumes all the CPU and memory resources it can.

You can already disable automatic garbage collection by setting gc.auto to zero. However, this has the downside that your Git performance will decay slowly as you accumulate new objects through your daily work.
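That setting is a single command:

    git config gc.auto 0   # disables auto-GC; without other maintenance, performance decays over time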

VFS for Git and Scalar both solve this problem by maintaining the repository in the background. This is also done incrementally to reduce the extra load on your machine. Let’s explore each of these background operations and how they improve the repository.

The config step updates your Git config settings to some recommended values. The config step runs in the background so that new versions of Scalar can update the registered repositories after install. As new config options are supported, we will update the list of settings accordingly.

Some of the noteworthy config settings are:

The fetch step runs git fetch about once an hour. This allows your local repository to keep its object database close to that of your remotes. This means that the time-consuming part of git fetch that downloads the new objects happens when you are not waiting for your command to complete.

We intentionally do not change your local branches, including the ones in refs/remotes. You still need to run git fetch in the foreground when you want ref updates from your remotes. We run git fetch with a custom refspec to put all remote refs into a new ref namespace: refs/scalar/hidden//. This allows us to have starting points when writing the commit-graph.

The Git commit-graph is critical to performance in repositories with hundreds of thousands of commits. While it is enabled and written during git fetch by default since Git 2.24.0, that does require a little bit of extra overhead in foreground fetches. To recover that time during git fetch while maintaining performance, we update the commit-graph in the background.

By running git commit-graph write --split --reachable, we update the commit-graph to include all reachable commits (including those reachable from refs in refs/scalar/hidden) and use the incremental file format to minimize the cost of these background operations.

As you work, Git creates “loose” objects by writing the data of a single object to a file named according to its SHA-1 hash. This is very quick to create, but accumulating too many objects like this can have significant performance drawbacks. It also uses more disk space than necessary, since Git’s pack-files can compress data more efficiently using delta encoding.

To reduce this overhead, the loose objects step will clean up your loose objects.

Pack-files are very efficient ways to store a set of Git objects. Each .pack file is paired with a .idx file called the pack-index, which allows Git to find the data for a packed object quickly. As pack-files accumulate, Git needs to inspect a long list of pack-indexes to find objects, so a previously fast operation becomes slow. Normally, garbage collection would occasionally group these pack-files into a single pack-file, improving performance.

But what happens if we have too much data to efficiently rewrite all Git data into a single pack-file? How can we keep the performance of a single pack-file while also performing smaller maintenance steps?

Our solution is the Git multi-pack-index file. Inspired by a similar feature in Azure Repos, the multi-pack-index tracks the location of objects across multiple pack-files. This file keeps Git’s object lookup time the same as if we had repacked into a single pack-file. Scalar runs git multi-pack-index write in the background to create the multi-pack-index.

(Figure: the multi-pack-index maintenance loop.)

However, there is still a problem. If we let the number of pack-files grow without bound, Git cannot hold file handles to all pack-files at once. Rewriting pack-files could also reduce space costs due to better delta encoding.

To solve this problem, Scalar has a pack-file maintenance step which performs an incremental repack by selecting a batch of small pack-files to rewrite. The multi-pack-index is a critical component for this rewrite. When the new pack-file is added to the multi-pack-index, the old pack-files are still referenced by the multi-pack-index, but all of their objects are pointing to the new pack-file. Any Git processes looking at the new multi-pack-index will never read from the old pack-files.

The git multi-pack-index repack command collects a set of small pack-files and creates a new pack-file containing all of the objects the multi-pack-index references from those pack-files. Then, Git adds the new pack-file to the multi-pack-index and updates those object references to point to the new pack-file. We then run git multi-pack-index expire which deletes the pack-files that have no referenced objects. By performing these in two steps, we avoid disrupting other Git commands a user may run in the foreground.
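In command form, the two steps look like this:

    git multi-pack-index repack   # rewrite a batch of small pack-files into one new pack-file
    git multi-pack-index expire   # delete pack-files whose objects are no longer referenced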

We intentionally are making Scalar do less and investing in making Git do more. Scalar is simply a way to get the performance we need today. As Git improves, Scalar can provide a transition path away from needing Scalar at all, toward using only the core Git client.

Scalar also serves as an example for the kinds of features we need in Git to remove these management layers on top. Here are a few of our planned Git contributions for the coming years.

I will be presenting these ideas and more at Git Merge 2020, so please check out the livestream at 12:00pm PT on March 4, 2020.

Please, give Scalar a try and let us know if it helps you. Is there something it needs to do better? Please create an issue to provide feedback.


Answer is posted for the following question.

How to avoid git gc?

Answer


Sasta Magera acne facial

Tirupati, Andhra Pradesh


Answer is posted for the following question.

Hey what is the best Acne Facial in Tirupati, Andhra Pradesh?

Answer


Leela of restaurants

Jaipur, Rajasthan


Answer is posted for the following question.

Any idea about the best Of Restaurants in Jaipur, Rajasthan?

Answer


Apna Somnath fajita tacos

Bengaluru, Karnataka


Answer is posted for the following question.

Where should I locate best Fajita Tacos in Bengaluru, Karnataka?

Answer


Sant Lal Shalini dal pakwan

Ambarnath, Maharashtra


Answer is posted for the following question.

Plz guide me the best Dal Pakwan in Ambarnath, Maharashtra?

Answer


Shri Ganesh jobs

Itanagar, Arunachal Pradesh


Answer is posted for the following question.

Do you have good idea about the best Jobs in Itanagar, Arunachal Pradesh?

Answer


Krishna Enterprieses Ahmed airport taxi

Bengaluru, Karnataka


Answer is posted for the following question.

What could be the best Airport Taxi in Bengaluru, Karnataka?

Answer


Mehta Kahil bbq mphis airport

Sirsa, Haryana


Answer is posted for the following question.

Will you hint the best Bbq Mphis Airport in Sirsa, Haryana?

Answer


Shri Ganesh apres ski

Jaipur, Rajasthan


Answer is posted for the following question.

Would you map me to the best Apres Ski in Jaipur, Rajasthan?

Answer


Best Price Jaideep acai places

Bhiwandi, Maharashtra


Answer is posted for the following question.

Where can I spot best Acai Places in Bhiwandi, Maharashtra?

Answer


Bharat Galina orthopedic surgeons for acl

Panaji, Goa


Answer is posted for the following question.

Will you share the best Orthopedic Surgeons For Acl in Panaji, Goa?

Answer


Domi Surya curry

Amaravati, Andhra Pradesh


Answer is posted for the following question.

Hey could you be kind enough to suggest the best Curry in Amaravati, Andhra Pradesh?

Answer


Kaka Ji Miguel dj shop

Darbhanga, Bihar


Answer is posted for the following question.

Where can I discover the best Dj Shop in Darbhanga, Bihar?

Answer


Money Value credit union bank

Nashik, Maharashtra


Answer is posted for the following question.

Could you share the best Credit Union Bank in Nashik, Maharashtra?

Answer


Shri Ganesh fever doctor

Agartala, Tripura


Answer is posted for the following question.

Where is the best Fever Doctor in Agartala, Tripura?

Answer


All Mart Nazmul bmw auto repair

Gangtok, Sikkim


Answer is posted for the following question.

Where is the best Bmw Auto Repair in Gangtok, Sikkim?

Answer


Singal Verma acne treatment

Hajipur, Bihar


Answer is posted for the following question.

Where would I find best Acne Treatment in Hajipur, Bihar?

Answer


Modern Antti outlet mall

Barasat, West Bengal


Answer is posted for the following question.

What is the best Outlet Mall in Barasat, West Bengal?

Answer


Grocery Shop Soumya acting school

Jaipur, Rajasthan


Answer is posted for the following question.

Where can I discover the best Acting School in Jaipur, Rajasthan?

Answer


Leela Immad dental lab

Kulti, West Bengal


Answer is posted for the following question.

Was there any best Dental Lab in Kulti, West Bengal?

Answer


City Giangiulio pav

Hapur, Uttar Pradesh


Answer is posted for the following question.

Where is the best Pav in Hapur, Uttar Pradesh?

Answer


Big Adebayo bait and tackle

Dehradun, Uttarakhand


Answer is posted for the following question.

Plz guide me the best Bait And Tackle in Dehradun, Uttarakhand?

Answer


Pahuja Jameel acoustic guitar repair

Faridabad, Haryana


Answer is posted for the following question.

Where could I locate best Acoustic Guitar Repair in Faridabad, Haryana?

Answer


Bhole Nath Spessartpix blind installation

Chandigarh, Punjab


Answer is posted for the following question.

Do you know the best Blind Installation in Chandigarh, Punjab?

Answer


Best Price Marques avocado salad

Bhopal, Madhya Pradesh


Answer is posted for the following question.

Where should I find best Avocado Salad in Bhopal, Madhya Pradesh?

Answer


Sharma Super Kam act tutors

Raiganj, West Bengal


Answer is posted for the following question.

What could be the best Act Tutors in Raiganj, West Bengal?

