Recent comments in /f/MachineLearning

science-raven OP t1_jbvlacq wrote

There are two types of weeding. The most common is dealing with the huge quantity of seedlings that come up in new soil. That's trivially easy for a human, and not too difficult for a robot; it's just too repetitive for a human. The difficult weeds are the ones that have to be drilled out, because humans don't have drills or mapping ability.

Drilling soil with an auger is actually a back-and-forth movement along a single vector, driven by one motor. The arm has at least one force sensor in the tool piece.
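
A minimal sketch of what that single-axis, force-limited "pecking" cycle could look like; read_tool_force, set_auger_velocity, and read_depth are hypothetical hardware callbacks, not a real robot API, and the numeric limits are assumed values:

import time

FORCE_LIMIT_N = 40.0   # back off above this tool-tip force (assumed value)
TARGET_DEPTH_M = 0.15  # desired hole depth (assumed value)
FEED_RATE = 0.01       # feed speed in m/s (assumed value)

def drill_hole(read_tool_force, set_auger_velocity, read_depth):
    # Single-motor pecking loop: feed down until the force sensor reads
    # too high, retract briefly to clear soil, then feed again.
    while read_depth() < TARGET_DEPTH_M:
        if read_tool_force() > FORCE_LIMIT_N:
            set_auger_velocity(+FEED_RATE)  # retract to clear soil
            time.sleep(0.5)
        else:
            set_auger_velocity(-FEED_RATE)  # feed downward
        time.sleep(0.02)                    # ~50 Hz control loop
    set_auger_velocity(0.0)                 # stop at target depth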

1

jonnyyen t1_jbvhdvn wrote

Nice to see a Python implementation of DeLong's method - I've had to use pROC (in R) for that in the past. For binary event analysis (among other things) there's also https://github.com/drsteve/PyForecastTools, which also has bootstrapped confidence intervals, or analytic CIs using Wald or Agresti-Coull. The terminology is from the weather literature, but it covers a lot of the same ground.
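
For reference, those two analytic intervals are simple to compute directly; a quick sketch of the standard formulas (not PyForecastTools' API):

import numpy as np
from scipy.stats import norm

def wald_ci(successes, n, alpha=0.05):
    # Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
    z = norm.ppf(1 - alpha / 2)
    p = successes / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull_ci(successes, n, alpha=0.05):
    # Agresti-Coull: add z^2/2 pseudo-successes and z^2/2 pseudo-failures,
    # then apply the Wald formula to the adjusted counts.
    z = norm.ppf(1 - alpha / 2)
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half = z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

print(wald_ci(42, 100))           # CI for e.g. a binary forecast hit rate
print(agresti_coull_ci(42, 100))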

5

Safe-Celebration-220 t1_jbuswo7 wrote

I think it's because humans have multiple neural networks that connect together. Humans have networks for sight, smell, sound, touch, and taste, all combined into one interconnected whole. If you took a human brain that had no experience of anything, could not process any of the five senses, and was only able to process language, then it too would need billions of texts before it could write sentences.

If GPT-3 had a network for all five senses and you taught it information grounded in all of them, it could make connections that were previously impossible. GPT-3 takes a word and connects it with other words to see how it fits into the context of a sentence, but a human connects a word with each and every sense, and that takes less data to learn from. A human can learn a language faster by connecting the things they see with the language they are learning; if they could not connect sight with language, learning it would become much, much harder. So the challenge we face right now is learning to connect neural networks with other neural networks in the same way a human does - a toy sketch of that kind of fusion is below.
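
Purely as an illustration of "connecting networks", here is a minimal late-fusion sketch in PyTorch; the encoders, dimensions, and fusion layer are all made-up assumptions, not how GPT-3 actually works:

import torch
import torch.nn as nn

class LateFusion(nn.Module):
    # Toy multimodal model: one embedding per sense, fused by concatenation.
    def __init__(self, text_dim=768, vision_dim=512, audio_dim=128, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + vision_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, text_emb, vision_emb, audio_emb):
        # Each *_emb would come from a separate, modality-specific encoder.
        return self.fuse(torch.cat([text_emb, vision_emb, audio_emb], dim=-1))

model = LateFusion()
joint = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 128))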

1

Real_Revenue_4741 t1_jbuhm8y wrote

In essence, "interacting with an object with an end effector" requires a lot of precision. It is more difficult than it seems to get it working on all types of weeds/plants. Weeding/digging requires a specific motion that may be difficult to accomplish without tactile feedback - it is not as simple as putting the tool at the right location. Irrigation may be easier because it requires little interaction with the environment. It would be fairly simple to get a system working with suboptimal performance, but that would not be enough to automate gardening without human intervention.

1

MrTacobeans t1_jbu7nw4 wrote

Regardless of whether this is possible at all, this robot, even at economies of scale, looks more like $15-20k. Advanced lawn Roombas are already in the $2-5k range; a fully autonomous lawn/garden maintenance bot will never be below $5k... unless it's just running around spraying water on stuff. The arm in your video alone would likely cost at least $2-3k after R&D is factored in.

1

pyepyepie t1_jbu75ec wrote

Let's agree to disagree. Your example shows random data, while my point is about how much of the original information your plot actually preserves after dimensionality reduction (you can't know).

Honestly, I am not sure what your work actually means since the details are kept secret. You could prove me wrong by reporting a little more or releasing the data, but more importantly, that would make your work a significant contribution.

Edit: I would like to see the plot compared against a very simple method, e.g. the mean of word embeddings. My hypothesis is that it would look similar as well; a sketch of that baseline is below.
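
A minimal sketch of that baseline, assuming docs (a list of token lists) and word_vecs (a token -> vector dict, e.g. GloVe) are already loaded - both are placeholders here:

import numpy as np
import matplotlib.pyplot as plt
from umap import UMAP

DIM = 300  # assumed word-vector dimensionality

def mean_embedding(tokens, word_vecs):
    # Average the word vectors; fall back to zeros for empty/unknown docs.
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# docs and word_vecs are assumed to be defined elsewhere (placeholders).
doc_embs = np.stack([mean_embedding(toks, word_vecs) for toks in docs])

x = UMAP().fit(doc_embs)
plt.scatter(x.embedding_[:, 0], x.embedding_[:, 1], s=1)
plt.show()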

11

Simusid OP t1_jbu5594 wrote

Actually the curated dataset (ref github in original post) is almost perfectly balanced. And yes, sentence embeddings are probably the SOTA approach today.

I agree that when I say the graphs "seem similar", that is a very qualitative label. However, I would not say it "means nothing". At the far extreme, if you plot:

import numpy as np, matplotlib.pyplot as plt
from umap import UMAP

x = UMAP().fit(np.random.random((10000, 75)))
plt.scatter(x.embedding_[:, 0], x.embedding_[:, 1], s=1)

you will get "hot garbage": one big undifferentiated blob. My goal, and my only goal, was to visually compare how "blobby" the OpenAI embeddings are vs. the ST ones. And clearly they are visually similar.

4

science-raven OP t1_jbu4m9q wrote

If you spend a moment on YouTube looking at the latest projects, there are quadcopters that pick apples and many impressive fruit-picking demonstrations. AI is fanning out into many fields.

Technologies can arrive late simply because they were missed: electronic cigarettes could have existed since the 1930s, when propylene glycol was already used in medicines.

As for the grit, yes, it's tricky. The robot could ask for a brush-down every week; there could be Teflon coatings, agricultural alloys, a brush so the robot can tidy its own tools, and a stethoscope-style audio sensor.

1