Recent comments in /f/MachineLearning
lppier2 t1_jbvhlwk wrote
Thanks, I’m more interested in real-world examples of how these two models compare. If sentence transformers can give me the same kind of embeddings, why am I paying OpenAI for the ada embeddings?
francozzz t1_jbvhf9n wrote
Reply to [P] Introducing confidenceinterval, the long missing python library for computing confidence intervals by jacobgil
I’ve just been asked to use confidence intervals for a project I’m working on; this comes as a godsend! Thanks!
jonnyyen t1_jbvhdvn wrote
Reply to [P] Introducing confidenceinterval, the long missing python library for computing confidence intervals by jacobgil
Nice to see a python implementation of deLong's method - I've had to use pROC (in R) for that in the past. For binary event analysis (among other things) there's also https://github.com/drsteve/PyForecastTools, which also has bootstrapped confidence intervals, or analytic CI using Wald or Agresti-Coull. The terminology is from weather literature, but it covers a lot of the same ground.
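For reference, the Wald and Agresti-Coull intervals mentioned above are simple enough to sketch with just the standard library (a minimal sketch of the textbook formulas, not PyForecastTools' actual API):

```python
from statistics import NormalDist

def wald_ci(k, n, alpha=0.05):
    """Wald (normal-approximation) CI for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = k / n
    half = z * (p * (1 - p) / n) ** 0.5
    return p - half, p + half

def agresti_coull_ci(k, n, alpha=0.05):
    """Agresti-Coull CI: add z^2/2 pseudo-successes and pseudo-failures,
    then apply the Wald formula to the adjusted counts."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_adj = n + z ** 2
    p_adj = (k + z ** 2 / 2) / n_adj
    half = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return p_adj - half, p_adj + half
```

The Agresti-Coull adjustment keeps the interval better behaved near 0 and 1, where the plain Wald interval is known to undercover.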
PuzzledWhereas991 t1_jbvcei5 wrote
Reply to [P] vanilla-llama an hackable plain-pytorch implementation of LLaMA that can be run on any system (if you have enough resources) by poppear
Which model can I run with 2 3060ti (8gb) and 1 3080 ti (12gb)?
Toilet_Assassin t1_jbva7zk wrote
Reply to comment by ng_guardian in [D] Statsmodels ARIMA model predict function not working by ng_guardian
Try googling 'statsmodels predict' and pay attention to which object the .predict() method is called from.
[deleted] t1_jbuz2cb wrote
Reply to comment by [deleted] in [P] GITModel: Dynamically generate high-quality hierarchical topic tree representations of GitHub repositories using customizable GNN message passing layers, chatgpt, and topic modeling. by NovelspaceOnly
Valid point though, I need some way to convert it to capital so I can pay my bills
[deleted] t1_jbuyv62 wrote
Reply to comment by LikeForeheadBut in [P] GITModel: Dynamically generate high-quality hierarchical topic tree representations of GitHub repositories using customizable GNN message passing layers, chatgpt, and topic modeling. by NovelspaceOnly
I’m trying to build wealth in the form of information.
Valuable-Kick7312 t1_jbuwppx wrote
Reply to [P] Introducing confidenceinterval, the long missing python library for computing confidence intervals by jacobgil
Cool! Does this always assume that the data are drawn iid?
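For intuition on the iid question: a nonparametric bootstrap, one common way such intervals are computed, resamples observations independently with replacement, which bakes in the iid assumption. A hedged sketch in plain numpy (not the confidenceinterval library's actual API):

```python
import numpy as np

def bootstrap_ci_mean(data, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean.

    Resampling with replacement treats every observation as exchangeable,
    i.e. it assumes the data are iid draws from one distribution; with
    dependent data (e.g. time series) this interval is too narrow.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    boot_means = data[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```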
ng_guardian OP t1_jbuwfbd wrote
Reply to comment by Toilet_Assassin in [D] Statsmodels ARIMA model predict function not working by ng_guardian
How do I overwrite it? That is all the documentation says
Safe-Celebration-220 t1_jbuswo7 wrote
I think it’s because humans have multiple neural networks that connect together. Humans have a neural network for sight, smell, sound, touch, and taste, all combined into one network that is interconnected within itself. If you took a human brain that had no experience of anything, could not process any of the five senses, and was only able to process language, then it would take that brain billions of texts before it could write sentences down. If GPT-3 had a neural network for all five senses and you taught it information based on all of those senses, then it could make connections that were previously impossible. GPT-3 takes a word and connects it with other words to see how it fits into the context of a sentence, but humans take a word and see how it connects with each and every sense, and that takes less information to learn. A human can learn a language faster by connecting the things they see with the language they are learning; if a human could not connect their sight with language, then learning that language would become much, much harder. So the challenge we face right now is learning to connect neural networks with other neural networks in the same way a human connects theirs.
Kaleidophon t1_jbus2rm wrote
Reply to [P] Introducing confidenceinterval, the long missing python library for computing confidence intervals by jacobgil
Very neat! I will add this to https://github.com/Kaleidophon/experimental-standards-deep-learning-research :-) Maybe you can also add citation info in case people want to refer to the package in their publication?
fastglow t1_jbupkr9 wrote
Reply to [P] Introducing confidenceinterval, the long missing python library for computing confidence intervals by jacobgil
Very cool. Thanks for making this.
Toilet_Assassin t1_jbujz0i wrote
Read the documentation; you are using it incorrectly.
onebigcat OP t1_jbuj0qk wrote
Reply to comment by [deleted] in [D] Unsupervised Learning — have there been any big advances recently? by onebigcat
I appreciate the insight! I’m new to ML (coming from the bio research side of things) and trying to keep up.
Real_Revenue_4741 t1_jbuhm8y wrote
Reply to comment by science-raven in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
In essence, "interacting with an object with an end effector" requires a lot of precision. It is more difficult than it seems to get it working on all types of weeds/plants. Weeding/digging requires a specific motion that may be difficult to accomplish without tactile feedback; it is not as simple as putting the tool at the right location. Irrigation may be easier because not much interaction with the environment is required. It will be pretty simple to get a system that works with suboptimal performance, but this would not be enough to automate gardening without human intervention.
LikeForeheadBut t1_jbug6d4 wrote
Kyle-Boi t1_jbug5j7 wrote
Wtf does that even mean?
marboka t1_jbue3u3 wrote
Reply to comment by onebigcat in [D] Unsupervised Learning — have there been any big advances recently? by onebigcat
DINO by facebook, STEGO by microsoft
[deleted] t1_jbudzup wrote
[deleted]
MrTacobeans t1_jbu7nw4 wrote
Reply to comment by science-raven in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
Regardless of whether this is even possible, this robot, even at economies of scale, looks more like $15-20k. Advanced lawn roombas are already in the $2-5k range; a fully autonomous lawn/garden maintenance bot will never be below $5k... Unless it's just running around spraying water on stuff. The arm in your video alone would likely cost at least $2-3k after R&D is factored in.
pyepyepie t1_jbu75ec wrote
Reply to comment by Simusid in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Let's agree to disagree. Your example shows random data while I talk about how much of the information your plot actually shows after dimensionality reduction (you can't know).
Honestly, I am not sure what your work actually means since the details are kept secret - I think you can shut my mouth by reporting a little more or releasing the data, but more importantly - it would make your work a significant contribution.
Edit: I would like to see a comparison of the plot with a very simple method, e.g. mean of word embeddings. My hypothesis is that it will look similar as well.
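The mean-of-word-embeddings baseline suggested in the edit can be sketched like this (random stand-in vectors for illustration; a real comparison would load pretrained vectors such as GloVe or word2vec):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in 50-d word vectors keyed by token; real use would load
# pretrained embeddings instead of random ones.
vocab = {w: rng.normal(size=50) for w in "the cat sat on mat dog ran".split()}

def mean_embed(sentence):
    """Average the word vectors of known tokens into one sentence vector."""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Sentence vectors from this baseline can be fed into the same UMAP scatter plot for a side-by-side comparison with the learned embeddings.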
polandtown t1_jbu56lb wrote
Reply to comment by Simusid in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Thanks!
Simusid OP t1_jbu5594 wrote
Reply to comment by pyepyepie in [Discussion] Compare OpenAI and SentenceTransformer Sentence Embeddings by Simusid
Actually the curated dataset (ref github in original post) is almost perfectly balanced. And yes, sentence embeddings are probably the SOTA approach today.
I agree that when I say the graphs "seem similar", that is a very qualitative label. However I would not say it "means nothing". At the far extreme, if you plot:
import numpy as np
import matplotlib.pyplot as plt
from umap import UMAP

x = UMAP().fit(np.random.random((10000, 75)))
plt.scatter(x.embedding_[:, 0], x.embedding_[:, 1], s=1)
You will get "hot garbage": a big blob. My goal, and my only goal, was to visually see how "blobby" OpenAI was vs ST. And clearly they are visually similar.
science-raven OP t1_jbu4m9q wrote
Reply to comment by deephugs in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
If you spend a moment on YT looking at the latest projects, you'll see quadcopters that pick apples and many awesome fruit-picking demonstrations. AI is fanning out into many fields.
Technologies can come late because they have been missed: electronic cigarettes could have existed since the 1930s, when propylene glycol was used for medicines.
For the grit, yes, it's tricky: the robot can ask for a brush-down every week, and there can be Teflon coatings, agri-alloys, a brush so the robot can tidy its tools, and a stethoscope audio sensor.
science-raven OP t1_jbvlacq wrote
Reply to comment by Real_Revenue_4741 in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
There are two types of weeding. The most common is the huge quantities of seedlings that come up in new soil: that's dizzyingly easy for a human and not too difficult for a robot, but it's too repetitive for a human. The difficult weeds are those that have to be drilled out, because humans don't have drills and mapping ability.
Drilling soil using an auger is actually a back-and-forth movement along a single vector on a single motor. The arm has at least one force sensor in the tool piece.