How do photo editors know what’s behind a person when removing them in an image?

For example, you remove a person from a picture and see a wall that was hidden behind them in the original shot.

4 Answers

Anonymous 0 Comments

Related to this: if I take a photo of a group and think I may need to edit it later, I always take ‘blanks’ before and after, meaning shots of the same scene without the subjects.
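
If the camera stays put (on a tripod, say), combining the frames later is simple masking: keep the group shot everywhere except over the person you want gone, and take those pixels from the blank. A minimal sketch with OpenCV and NumPy, assuming hypothetical filenames and a hand-painted mask image that is white over the person to remove:

```python
import cv2
import numpy as np

# Hypothetical filenames. Assumes both shots were taken from a tripod,
# so the two frames are already pixel-aligned.
with_people = cv2.imread("group_shot.jpg")
blank = cv2.imread("blank_shot.jpg")

# mask.png: white where the person to remove is, black everywhere else.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Where the mask is set, take pixels from the blank frame;
# everywhere else, keep the original shot.
result = np.where(mask[:, :, None] > 0, blank, with_people)
cv2.imwrite("edited.jpg", result)
```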

Anonymous 0 Comments

That’s not how it works. A camera cannot capture details that are “behind” a person. If a person (or any object) is removed from a photo, that entire region of the image goes with them. If it isn’t replaced by something else (the wall in your example), the spot is just empty space, like a blank patch on a painting canvas. The wall that’s there in the finished edit is “cloned” from another section of the image into that empty space (i.e. copied and pasted).
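
In editing tools this is the clone stamp (or a crude version of the healing brush), and at the pixel level it really is just copying one block of the image array over another. A minimal sketch with OpenCV, where the filename and coordinates are made up for illustration:

```python
import cv2

img = cv2.imread("photo.jpg")  # hypothetical filename

# Made-up coordinates: copy an 80x100 patch of clean wall and
# paste it over the hole left by the removed person.
src_y, src_x = 50, 200   # top-left corner of a clean wall region
dst_y, dst_x = 50, 400   # top-left corner of the area to cover
h, w = 80, 100

img[dst_y:dst_y + h, dst_x:dst_x + w] = img[src_y:src_y + h, src_x:src_x + w]
cv2.imwrite("cloned.jpg", img)
```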

Anonymous 0 Comments

They guess based on the surroundings of the removed area, because there is no actual data for what’s behind it.

The _how_ is very complicated and these days involves lots of machine learning, but the general idea is that they look for patterns near the area being removed and infer details based on the patterns.

Old algorithms would try to continue nearby patterns across the removed area. This worked pretty well when the background was a simple repeating pattern, like brickwork or grass, but did poorly when something important was obviously changing behind the removed object.
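
OpenCV still ships two of these classical algorithms behind `cv2.inpaint`, so they are easy to try. A minimal sketch, assuming hypothetical filenames and a mask that is white over the region to fill:

```python
import cv2

img = cv2.imread("photo.jpg")                        # hypothetical filename
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = region to fill

# Classical inpainting propagates nearby colors and gradients into the
# hole. INPAINT_TELEA uses a fast-marching method; INPAINT_NS solves a
# Navier-Stokes style equation. Both handle small, texture-like holes
# well and struggle with large or structured backgrounds.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("inpainted.jpg", result)
```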

With machine learning, the model can infer the existence of more complicated backgrounds given clues. For example, if it sees a doorframe surrounding a person that you want to remove, it can infer the existence of a door behind the person and fill in details accordingly. It “knows” about doors and doorframes because it was trained on many pictures with doors in doorframes and has stored that information in some format during its training.
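
As a concrete example of this kind of model, the Hugging Face `diffusers` library wraps diffusion-based inpainting behind a single pipeline call. A minimal sketch, assuming a CUDA GPU and hypothetical filenames; the checkpoint named here is one publicly available inpainting model, not the only choice:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# One publicly available inpainting checkpoint; any diffusion inpainting
# model with the same pipeline interface behaves similarly.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB")  # hypothetical filenames;
mask = Image.open("mask.png").convert("RGB")    # white = region to replace

# The model fills the masked region with content that fits both the text
# prompt and the surrounding pixels, e.g. a door inside a doorframe.
result = pipe(prompt="a closed wooden door", image=image, mask_image=mask).images[0]
result.save("filled.png")
```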

Machine learning models do not “understand” photos the same way that humans do, so the exact way this happens is poorly understood, and the results are often unreliable. Even so, it generally produces better results than the earlier “copy a pattern” algorithms did.

Anonymous 0 Comments

They don’t. They just guess. That’s usually not a problem, because the people looking at the edited picture later most likely don’t know what should be there either. So it just needs to look plausible.
