Explore Ximilar image recognition services. Get quick answers to common questions about our ready-to-use API and models.
How does fashion tagging with AI work?
Fashion Tagging is a visual AI service that automatically recognizes fashion products in images, providing their category, subcategory, and various tags. An optional meta endpoint tags the background, scene, view, and body parts of the person wearing the items, which helps with selecting images for product listings and galleries.
Your fashion image is processed by a group of object detection and image recognition models working together. First, all fashion apparel and accessories are detected. Then, each item is categorized and tagged accordingly (e.g., shoes are tagged for sole, heel, material, and type). The main color of each recognized object is also extracted by default, eliminating the need for a separate color extraction model unless detailed color analysis is required.
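As a sketch of how such results might be consumed, the snippet below parses an illustrative tagging response. The field names (`records`, `_objects`, `_tags`, `Top Category`, `Color`) are assumptions for demonstration only, not the authoritative schema; consult the API documentation for the real response format.

```python
import json

# Illustrative response shape only -- the field names are assumptions
# based on the description above, not Ximilar's actual schema.
sample_response = json.loads("""
{
  "records": [
    {
      "_objects": [
        {
          "name": "shoes",
          "Top Category": "footwear",
          "_tags": {"Material": "leather", "Heel": "flat", "Type": "sneakers"},
          "Color": "white"
        }
      ]
    }
  ]
}
""")

def summarize(record):
    """Collect category, tags, and main color for each detected fashion item."""
    items = []
    for obj in record.get("_objects", []):
        items.append({
            "category": obj.get("Top Category"),
            "tags": obj.get("_tags", {}),
            "color": obj.get("Color"),
        })
    return items

for record in sample_response["records"]:
    for item in summarize(record):
        print(item["category"], item["color"], sorted(item["tags"]))
```

The same loop would work on a multi-object image, since each detected item arrives as a separate object entry.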
Most fashion e-shops and comparison websites use Fashion Search, which combines Fashion Tagging with Object Detection and Product Recommendations (visual search). To use Fashion Search, your collection is synchronized to the Ximilar cloud, where each picture is analyzed only once and then discarded. You will get categories, tags, colors, and similar items for each of your images with a single API request.
Where do I find Ximilar fashion taxonomy?
The fashion taxonomy of Ximilar is publicly available in several places, most notably in our API documentation, which lists the full set of categories and tags.
To what kind of images is Fashion Tagging applicable?
Automated Fashion Tagging is used on fashion product images of e-shops, price comparators, fashion brands, and specialized collections. It is based on numerous image recognition tasks, each trained to recognize a separate product category, combined with object detection models. That is why it works on both single product images and more complex images, including user-generated content and social media images.
Can I try how Ximilar fashion solutions work on my photos?
Yes! Ximilar has a free and public Fashion Tagging demo. You can either upload images or enter their URLs and see for yourself how automatic fashion tagging works.
You can also use Fashion Tagging in our App. See our Pricing for details. If you have large volumes of images to be processed every month or need customization, contact us to discuss a custom plan.
What are the differences between Ximilar fashion solutions for recognition, data enrichment and product search?
Fashion Tagging labels your fashion items, assigning categories (e.g., skirts), subcategories (e.g., A-line skirts), and tags (for color, design, pattern, length, rise, style…). By default, it provides data for one main object in an image. The meta endpoint can also provide tags for the photography background, scene, or body part in the fashion image.
Fashion Search is an all-encompassing solution, wrapping all typical fashion AI services into one. It integrates object detection (Search by Photo), Fashion Tagging, and visual similarity search.
Both Fashion Tagging and Fashion Search include color analysis. The colors are supplied as tags and can serve for filtering and search on your website.
I only need a single fashion AI solution
All our fashion AI solutions can also be employed individually. Examples include product similarity, search by photo (reverse image search), fashion apparel detection, or color-based search.
Can Fashion Tagging detect and analyze multiple items in an image?
Fashion Tagging can be combined with object detection to categorize and tag individual items in a more complex fashion image. That is why our solution Fashion Search automatically detects apparel, footwear and accessories in your images, provides tags, and finds the most similar products or images.
These fashion services work on both product images and real-life photos, e.g., fashion influencer pictures. The meta endpoint can also optionally provide tags for the background, scene, view, and body parts of the person wearing the items.
What attributes or features does deep product tagging recognize?
Fashion Tagging is one of our most complex ready-to-use services. It works with over a hundred recognition tasks, hundreds of labels, and dozens of fashion attributes.
It identifies the top category of the product (e.g., accessories, bags, jewellery, watches, clothing, underwear, footwear), then the category (e.g., accessories/belts), and its features such as color, design, material, or pattern.
Customization
There is also an optional meta endpoint for tagging the background, scene, view, and body part of the person wearing the items.
If you miss any important attributes, the taxonomy can be adjusted to fit your use case. It can also be used in languages other than English.
Level up with Fashion Search
Furthermore, our service Fashion Search combines deep tagging with object detection to ensure all fashion items in an image are tagged. The detected objects are then automatically used for similarity search in your collection.
Is it possible to change, rename, and add tags, or use my own fashion taxonomy?
With Ximilar, you can customize the tagging taxonomy: tags can be renamed, replaced, or mapped to your own taxonomy, and the results can be provided in other languages. As a first step, we recommend trying the service in our public demo and checking the API documentation, including the full taxonomy.
If you do not find the attributes you need, contact us to modify the service to fit your use case.
You can also use our Computer Vision platform to train your own custom categorization and tagging models and combine them with ready-to-use solutions.
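Ximilar applies taxonomy mapping server-side via a custom profile, but the idea can be illustrated with a small client-side sketch. All tag names below are made-up examples, not the actual Ximilar taxonomy:

```python
# Client-side sketch of taxonomy mapping: translate or rename known tags,
# keep everything else unchanged. Tag names here are illustrative only.
TAXONOMY_MAP = {
    "Skirts": "Röcke",              # rename into another language
    "A-line skirts": "A-Linien-Röcke",
    "Maxi": "Maxi-Länge",
}

def map_tags(tags, mapping=TAXONOMY_MAP):
    """Replace known tag names, pass unknown tags through unchanged."""
    return [mapping.get(tag, tag) for tag in tags]

print(map_tags(["Skirts", "Maxi", "Denim"]))
# -> ['Röcke', 'Maxi-Länge', 'Denim']
```

A mapping like this keeps your store's existing filters working while the underlying recognition taxonomy stays the same.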
What colors and which palettes can Ximilar AI extract? What is the format of the results?
We offer multiple options for dominant color extraction that you can select from. The outcome is provided in a structured format, usually JSON.
Dominant product vs. whole image
The product endpoint allows you to extract colors from a single dominant object in an image (product photo), whereas the generic endpoint extracts the dominant colors from the entire image, a mode typically used in stock photography.
Basic color for searching & filtering
This mode identifies one main color of the dominant object out of a total of 16 basic colors. The extracted color can be utilized as an attribute for filtering and searching fashion items.
Pantone palette: detailed color analysis
This mode provides a group of dominant colors, their hex codes, the closest Pantone name, and coverage of the image in %. It is ideal for similarity search (search by color).
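A sketch of working with such a detailed color result is below. The list structure, field names, and values are illustrative assumptions, not the exact API schema:

```python
# Illustrative result of the detailed (Pantone) color analysis:
# hex code, closest Pantone name, and percentage of image coverage.
pantone_result = [
    {"hex": "#0f4c81", "pantone_name": "Classic Blue", "percentage": 62.5},
    {"hex": "#d9d9d6", "pantone_name": "Cool Gray 1", "percentage": 37.5},
]

def dominant(colors):
    """Return the color entry covering the largest share of the image."""
    return max(colors, key=lambda c: c["percentage"])

top = dominant(pantone_result)
print(top["pantone_name"], top["hex"], f'{top["percentage"]}%')
```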
What is automated Home Decor & Furniture Tagging? How does it work?
Automated Home Decor & Furniture Tagging is a visual AI service that automatically recognizes categories and sub-categories in furniture or home decor product images, and provides tags describing the main products.
What kinds of images does automated Home Decor & Furniture Tagging work with?
The automated Home Decor & Furniture Tagging works mainly with home decor and furniture product images from price comparators, sellers, hotels, architectural studios, designers, and specialized collections. You can try how it works on your images in a public demo.
What attributes or features does Home Decor & Furniture Tagging recognize?
This service categorizes and tags the dominant home decor or furniture item in the image. It identifies the top category of the image (all rooms, bathroom, bedroom, kitchen), then the category (e.g., bedroom/duvet covers), and its features such as color, shape, pattern, and material.
Where do I find Ximilar Home Decor & Furniture Tagging taxonomy?
The full taxonomy is available in our API documentation.
Can Home Decor & Furniture Tagging detect and analyze multiple items in an image?
This service was created to work mainly with product images, and therefore it categorizes the dominant product in the image, based on an image recognition task. It can, however, be combined with a custom object detection task to detect specific furniture pieces or decorations and then analyze them separately. Feel free to contact us to discuss a custom solution.
Can I try how the automated Home Decor & Furniture Tagging works on my photos?
Yes! Ximilar has a public Home Decor & Furniture Tagging demo. You can either upload images or enter their URLs and see for yourself how it works. You can also use this service in our App. Home Decor & Furniture Tagging is available in all pricing plans. If you have large volumes of images to be processed every month or need customization, contact us to discuss a custom plan.
Can I rename, change or use my own tags in automatic Home Decor & Furniture Tagging?
Custom taxonomies can be applied through taxonomy mapping. The tagging services can also be easily switched to different languages. We can also replace or add tags in accordance with your needs.
We recommend trying how the service works in our public demo & App and checking the API documentation including full taxonomy. If you do not find the attributes you need, contact us to modify the service to fit your use case.
You can also use our Computer Vision platform to train your own custom categorization and tagging models and combine them with ready-to-use solutions.
I need to get the Home Decor & Furniture Tagging results in my own taxonomy. Can you do that?
With Ximilar, you can both use your own taxonomy and get the results of Home Decor & Furniture Tagging in your own language. The first is achieved by mapping your taxonomy to ours, the second by translating the taxonomy into your language. Contact us to set up a custom profile.
Read more in the recommended FAQs.
How do Dominant Colors work? What is analyzing dominant colors good for?
Dominant Colors is a visual AI service that extracts the most prevalent colors from images. You can choose one of two ways to use this service, depending on whether you need to analyze generic photos (real-life and stock photos) or product photos.
The endpoint for generic photos detects up to 6 dominant colors (covering the most area) from the whole image, without modifying it. This endpoint is more suitable for stock photos or real-life photos where you need the entire picture to be analyzed, not only the foreground object.
The endpoint for product photos additionally includes a background removal step, after which it analyzes the 6 dominant colors of the foreground object and picks the 3 major colors (covering the largest area). The product color endpoint is ideal for product photos where one dominant item is in the picture.
Both endpoints return one or more dominant colors in several formats: RGB values, hex codes, and CIE Luv coordinates, together with names according to the CSS3 color standard and Pantone color naming.
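The RGB and hex notations encode the same value; the conversion is a simple formatting step, sketched below:

```python
def rgb_to_hex(r, g, b):
    """Convert 8-bit RGB components to the '#rrggbb' hex notation."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(rgb_to_hex(255, 99, 71))   # -> #ff6347 (the CSS3 color "tomato")
```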
How many colors do the Dominant Colors analyze?
The basic palette of 16 basic colors is useful for tagging, sorting and filtering products or pictures in e-shops and on comparison websites. The Dominant Colors with this basic setting are included in our Fashion Tagging service.
The results of the advanced color analysis are provided as a group of colors from the Pantone color palette. You get their exact hex codes, the name of the closest color in this palette, and the percentage of the area they cover. This mode is ideal for similarity & visual search solutions, where you need to know the exact colors.
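One simple way to map an extracted RGB color onto a small basic palette is nearest-neighbor matching in RGB space. This is only a sketch of the idea; the palette below is an example subset, not Ximilar's actual 16-color palette or matching method:

```python
# Illustrative basic palette (a subset of common colors, not Ximilar's).
BASIC_PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def nearest_basic(rgb):
    """Pick the palette color with the smallest squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(BASIC_PALETTE, key=lambda name: dist(rgb, BASIC_PALETTE[name]))

print(nearest_basic((250, 20, 30)))  # -> red
```

Perceptual color spaces such as CIE Luv (which the service also returns) give distances that match human perception better than raw RGB.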
How do I use the Dominant Colors service?
All of our services can be used through the App or via API, either separately or as part of a more complex solution assembled via Flows.
In our App, you can find Dominant Colors under the Ready-to-use Image Recognition services. It is available in all pricing plans, including Free. You can upload images via URLs or by drag-and-drop.
You can also test the service in our public demo.
Can I get an area (%) covered by colors in the image?
Yes, you can. Check the API documentation and see how it works in our App.
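For instance, the coverage values can be used to keep only the colors that occupy a meaningful share of the image. The field names below are illustrative assumptions, not the exact API schema:

```python
# Illustrative color list with the area (%) each color covers.
colors = [
    {"name": "navy", "percentage": 54.0},
    {"name": "grey", "percentage": 31.0},
    {"name": "white", "percentage": 15.0},
]

# Keep only colors covering at least 20% of the image area.
major = [c["name"] for c in colors if c["percentage"] >= 20.0]
print(major)  # -> ['navy', 'grey']
```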
What is the difference between product and generic color endpoint?
The product endpoint is suitable for product photos (with a solid, more or less homogeneous background). This endpoint first tries to remove the background, and then the colors are extracted from the dominant (foreground) object.
The generic endpoint, on the other hand, analyses all pixels from the image, independent of the objects in it.
What is the difference between Product Similarity, Search by Photo, Photo Similarity, Fashion Search, Home Decor Search and Custom Visual Search?
Product Similarity (visual product search) was built for e-commerce. It is useful for finding similar pictures of products with image queries, similar product recommendations, and product matching.
Search by Photo (product search by image) combines product similarity with object detection to provide similar pictures specifically for the detected object, such as a piece of fashion apparel. It can be used in reverse image search engines for fashion, home decor, and other e-commerce products.
Photo Similarity (similar photo search) works with the same technology, but it was trained for generic images, such as stock photos or real-life images.
Fashion Search is a specialized service for fashion e-commerce, which combines visual & similarity search with object detection (Search by Photo) and Fashion Tagging.
Home Decor Search works in the same way in the field of home decor and furniture photos. It also combines visual & similarity search with object detection (Search by Photo) and Furniture & Home Decor Tagging.
Custom Visual Search refers to all solutions using visual & similarity search we build from scratch for our customers.
Does Ximilar provide tagging services in other languages than English?
The default language of Ximilar’s tagging services is English, and they can be easily switched to other languages. Our Fashion Tagging is already available in Spanish. Other languages can be added on request. We can also simply replace selected tags with the tags of your choice. Read more in recommended FAQs & API documentation, and don’t hesitate to contact us to discuss your goals.
What is AI Recognition of Collectibles and how does it work?
AI Recognition of Collectibles is a service created for websites and apps for collectors. It automatically detects and recognizes collectible items, such as cards, coins, banknotes, or stamps.
The service is fully customizable for different types of collectibles. For example, let’s say you are building an app for the automatic recognition of baseball cards. We would use the basic service and add precise recognition of different cards based on their images, text, or packaging.
We can add tasks that will recognize the edition, year, symbols, or text on the collectible items and provide you with tags that can be used as keywords for searching and filtering items on your website.
Additionally, this service can be combined with other solutions, for example, custom visual search or automated visual inspection.
Which collectibles can AI Recognition of Collectibles recognize?
As of now, the service is able to detect (and mark with bounding boxes) collectibles such as stamps, coins, banknotes, comic books, and trading cards, as well as antique items.
For collectible cards, the service can identify whether it is a Trading Card Game card (Pokémon, Magic: The Gathering – MTG, Yu-Gi-Oh!, Lorcana, Flesh and Blood, and so on) or a Sports Card (Baseball, Basketball, Hockey, Football, Soccer, or MMA), with several additional features (e.g., signature). It can be easily customized to evaluate images based on your criteria. The full taxonomy of the identification results, with all supported games and sports, can be found on our documentation page.
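A sketch of consuming such a detection result is below. The object structure, field names, and bounding-box convention are illustrative assumptions, not the real Ximilar schema; check the documentation for the actual format:

```python
# Illustrative shape of a card detection/identification result.
detection = {
    "_objects": [
        {
            "name": "Card",
            "bound_box": [120, 80, 520, 640],   # assumed [x_min, y_min, x_max, y_max]
            "Category": "Trading Card Game",
            "Game": "Pokémon",
            "Signature": False,
        }
    ]
}

def box_area(box):
    """Pixel area of a bounding box given as [x_min, y_min, x_max, y_max]."""
    x_min, y_min, x_max, y_max = box
    return (x_max - x_min) * (y_max - y_min)

for obj in detection["_objects"]:
    print(obj["Category"], obj.get("Game"), box_area(obj["bound_box"]))
```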
For comic books, the service can identify more than 1 million magazines, books, and manga by name, title, publisher, issue number, and release date.
The service is constantly expanding based on the requests from our customers.
Can your AI identify or find the exact collectible based on a photo?
Yes, it can. We will create a customized visual search. After that, you will be able to search your database with image queries or recommend similar items. The visual search will be independent of the origin, resolution, or color quality of your images.
The system works via REST API and is able to scale to hundreds of requests per second.
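A minimal sketch of preparing such a REST request is below. The endpoint path and the `records`/`_url` body shape are illustrative assumptions, and the `Token` authorization scheme should be verified against the API documentation before use:

```python
# Sketch of building an authenticated request for a Ximilar-style REST API.
# The URL path and payload field names are assumptions for illustration.
def build_request(api_token, image_urls):
    return {
        "url": "https://api.ximilar.com/collectibles/v2/detect",  # illustrative path
        "headers": {
            "Authorization": f"Token {api_token}",
            "Content-Type": "application/json",
        },
        "json": {"records": [{"_url": u} for u in image_urls]},
    }

req = build_request("YOUR_API_TOKEN", ["https://example.com/card.jpg"])
print(req["headers"]["Authorization"])
```

The resulting dictionary can be passed to any HTTP client (e.g., `requests.post(req["url"], headers=req["headers"], json=req["json"])`).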
What is the difference between AI Recognition of Collectibles and Custom Visual Search features?
AI Recognition of Collectibles is a basic AI system for detecting and analyzing images of collectibles, such as trading card games, sports cards, coins, stamps, or antique items. It can be combined with a custom visual search solution to find images in your collection based on a query image, recommend similar items, or match and eliminate duplicates in item galleries. This system is always customized for specific customers’ needs.
Custom Visual Search, on the other hand, refers to any custom or customized solution built with our visual search platform. Contact us to discuss your application.
How does the automatic visual inspection of collectibles work?
Visual inspection systems powered by AI depend on the type of data. We will develop a custom system based on your use case. To do so, we will need a dataset of training images from you (representing the images you typically work with). The system will then be able to detect signatures or packaging, and analyze scratches or the edges of the item. Contact us to discuss your use case.
Can AI Recognition of Collectibles read a text (OCR) or a score from a graded collectible item?
OCR is not a part of the basic service at this moment. If you are interested in it, contact us.